Feb 01 07:22:05 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 01 07:22:05 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:05 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 01 07:22:06 crc restorecon[4688]: 
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 
07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc 
restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 01 07:22:06 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 01 07:22:07 crc kubenswrapper[4835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 01 07:22:07 crc kubenswrapper[4835]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 01 07:22:07 crc kubenswrapper[4835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 01 07:22:07 crc kubenswrapper[4835]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 01 07:22:07 crc kubenswrapper[4835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 01 07:22:07 crc kubenswrapper[4835]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.286280 4835 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300572 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300610 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300619 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300628 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300636 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300645 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300653 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300663 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300674 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300685 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300695 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300705 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300715 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
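The six flag-deprecation warnings above all point at the same migration: each flag (except --pod-infra-container-image, which per its own message has no config replacement because the image garbage collector now reads the sandbox image from CRI) moves into the KubeletConfiguration file named by --config. A minimal sketch of the config-file equivalents, assuming the upstream kubelet.config.k8s.io/v1beta1 schema; the socket path, taint, and reservation values are illustrative assumptions, not read from this node:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (CRI-O socket path assumed)
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
# replaces --volume-plugin-dir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints (illustrative taint)
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
# replaces --system-reserved (illustrative sizes)
systemReserved:
  cpu: "500m"
  memory: "1Gi"
# --minimum-container-ttl-duration is superseded by eviction settings
# per the warning above (illustrative threshold)
evictionHard:
  memory.available: "100Mi"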
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300725 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300734 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300743 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300754 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300762 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300771 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300779 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300787 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300795 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300804 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300811 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300818 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300828 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300838 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300846 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300855 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300863 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300870 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300879 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300888 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300896 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300911 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300919 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300930 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
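[editor's note] The long runs of "unrecognized feature gate" messages are expected on an OpenShift node: the rendered kubelet config carries the cluster-wide gate list (AdminNetworkPolicy, GatewayAPI, and the rest above are OpenShift-level gates), and the kubelet recognizes only the upstream Kubernetes subset, warning on everything else. Note these are W (warning) lines from feature_gate.go:330, not errors. To see where the list originates once the API server is reachable (oc and the cluster-scoped FeatureGate object are the OpenShift defaults):

    # The cluster-scoped FeatureGate object that operators render into
    # per-component configs, including the kubelet's
    oc get featuregate cluster -o yaml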
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300940 4835 feature_gate.go:330] unrecognized feature gate: Example Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300948 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300955 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300963 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300970 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300979 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300986 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.300994 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301001 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301009 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301017 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301025 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301032 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301040 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301048 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301056 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301063 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301070 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301078 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301085 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301093 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301100 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301108 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301116 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301123 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301131 4835 feature_gate.go:330] unrecognized feature gate: 
MinimumKubeletVersion Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301139 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301146 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301154 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301161 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301169 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301177 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301185 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.301195 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301369 4835 flags.go:64] FLAG: --address="0.0.0.0" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301396 4835 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301447 4835 flags.go:64] FLAG: --anonymous-auth="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301467 4835 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301481 4835 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301491 4835 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301503 4835 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301514 4835 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301524 4835 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301533 4835 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301542 4835 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301557 4835 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301566 4835 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301576 4835 flags.go:64] FLAG: --cgroup-root="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301585 4835 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301594 4835 flags.go:64] FLAG: --client-ca-file="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301602 4835 flags.go:64] FLAG: --cloud-config="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301611 4835 flags.go:64] FLAG: --cloud-provider="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301620 4835 flags.go:64] FLAG: --cluster-dns="[]" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301632 4835 flags.go:64] FLAG: --cluster-domain="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 
07:22:07.301641 4835 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301649 4835 flags.go:64] FLAG: --config-dir="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301658 4835 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301670 4835 flags.go:64] FLAG: --container-log-max-files="5" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301692 4835 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301702 4835 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301711 4835 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301720 4835 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301730 4835 flags.go:64] FLAG: --contention-profiling="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301739 4835 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301748 4835 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301757 4835 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301765 4835 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301777 4835 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301787 4835 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301797 4835 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301805 4835 flags.go:64] FLAG: --enable-load-reader="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301815 4835 flags.go:64] FLAG: --enable-server="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301824 4835 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301835 4835 flags.go:64] FLAG: --event-burst="100" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301846 4835 flags.go:64] FLAG: --event-qps="50" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301856 4835 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301865 4835 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301874 4835 flags.go:64] FLAG: --eviction-hard="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301886 4835 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301894 4835 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301903 4835 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301913 4835 flags.go:64] FLAG: --eviction-soft="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301922 4835 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301931 4835 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 01 
07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301940 4835 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301949 4835 flags.go:64] FLAG: --experimental-mounter-path="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301958 4835 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301967 4835 flags.go:64] FLAG: --fail-swap-on="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301976 4835 flags.go:64] FLAG: --feature-gates="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301987 4835 flags.go:64] FLAG: --file-check-frequency="20s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.301996 4835 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302005 4835 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302014 4835 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302024 4835 flags.go:64] FLAG: --healthz-port="10248" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302033 4835 flags.go:64] FLAG: --help="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302042 4835 flags.go:64] FLAG: --hostname-override="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302051 4835 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302060 4835 flags.go:64] FLAG: --http-check-frequency="20s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302069 4835 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302078 4835 flags.go:64] FLAG: --image-credential-provider-config="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302087 4835 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302096 4835 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302105 4835 flags.go:64] FLAG: --image-service-endpoint="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302114 4835 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302123 4835 flags.go:64] FLAG: --kube-api-burst="100" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302132 4835 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302142 4835 flags.go:64] FLAG: --kube-api-qps="50" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302151 4835 flags.go:64] FLAG: --kube-reserved="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302160 4835 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302170 4835 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302179 4835 flags.go:64] FLAG: --kubelet-cgroups="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302188 4835 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302198 4835 flags.go:64] FLAG: --lock-file="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302207 4835 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 01 07:22:07 crc kubenswrapper[4835]: 
I0201 07:22:07.302216 4835 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302226 4835 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302239 4835 flags.go:64] FLAG: --log-json-split-stream="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302249 4835 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302258 4835 flags.go:64] FLAG: --log-text-split-stream="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302267 4835 flags.go:64] FLAG: --logging-format="text" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302276 4835 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302286 4835 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302295 4835 flags.go:64] FLAG: --manifest-url="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302304 4835 flags.go:64] FLAG: --manifest-url-header="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302315 4835 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302325 4835 flags.go:64] FLAG: --max-open-files="1000000" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302335 4835 flags.go:64] FLAG: --max-pods="110" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302344 4835 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302353 4835 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302362 4835 flags.go:64] FLAG: --memory-manager-policy="None" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302371 4835 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302380 4835 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302389 4835 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302400 4835 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302462 4835 flags.go:64] FLAG: --node-status-max-images="50" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302471 4835 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302481 4835 flags.go:64] FLAG: --oom-score-adj="-999" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302490 4835 flags.go:64] FLAG: --pod-cidr="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302499 4835 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302511 4835 flags.go:64] FLAG: --pod-manifest-path="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302520 4835 flags.go:64] FLAG: --pod-max-pids="-1" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302529 4835 flags.go:64] FLAG: --pods-per-core="0" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302538 4835 flags.go:64] FLAG: --port="10250" Feb 01 07:22:07 crc 
kubenswrapper[4835]: I0201 07:22:07.302547 4835 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302556 4835 flags.go:64] FLAG: --provider-id="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302565 4835 flags.go:64] FLAG: --qos-reserved="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302574 4835 flags.go:64] FLAG: --read-only-port="10255" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302583 4835 flags.go:64] FLAG: --register-node="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302593 4835 flags.go:64] FLAG: --register-schedulable="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302602 4835 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302617 4835 flags.go:64] FLAG: --registry-burst="10" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302626 4835 flags.go:64] FLAG: --registry-qps="5" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302635 4835 flags.go:64] FLAG: --reserved-cpus="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302645 4835 flags.go:64] FLAG: --reserved-memory="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302657 4835 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302665 4835 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302675 4835 flags.go:64] FLAG: --rotate-certificates="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302684 4835 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302693 4835 flags.go:64] FLAG: --runonce="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302701 4835 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302711 4835 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302720 4835 flags.go:64] FLAG: --seccomp-default="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302729 4835 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302738 4835 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302747 4835 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302756 4835 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302766 4835 flags.go:64] FLAG: --storage-driver-password="root" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302775 4835 flags.go:64] FLAG: --storage-driver-secure="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302943 4835 flags.go:64] FLAG: --storage-driver-table="stats" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302958 4835 flags.go:64] FLAG: --storage-driver-user="root" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302967 4835 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302978 4835 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.302988 4835 flags.go:64] FLAG: --system-cgroups="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 
07:22:07.302997 4835 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303012 4835 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303022 4835 flags.go:64] FLAG: --tls-cert-file="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303031 4835 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303051 4835 flags.go:64] FLAG: --tls-min-version="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303060 4835 flags.go:64] FLAG: --tls-private-key-file="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303069 4835 flags.go:64] FLAG: --topology-manager-policy="none" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303079 4835 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303088 4835 flags.go:64] FLAG: --topology-manager-scope="container" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303097 4835 flags.go:64] FLAG: --v="2" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303115 4835 flags.go:64] FLAG: --version="false" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303125 4835 flags.go:64] FLAG: --vmodule="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303135 4835 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.303145 4835 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303357 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303367 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303377 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303387 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303396 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303405 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303440 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303449 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303459 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
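[editor's note] The flags.go:64 block is the kubelet echoing every command-line flag's parsed value at startup (verbose logging is on here, --v="2"). These are the CLI values only: the file given by --config supplies or overrides defaults separately, and an explicitly set flag takes precedence over the config file, which makes this dump useful for spotting stale systemd drop-ins. One way to pull just this block out of the journal, assuming the unit is named kubelet as in the "Starting Kubernetes Kubelet" line:

    # Extract the flag dump from the current boot's kubelet log
    journalctl -b -u kubelet | grep 'FLAG:'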
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303469 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303479 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303488 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303497 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303506 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303514 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303524 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303534 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303543 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303551 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303559 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303567 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303575 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303583 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303591 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303599 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303607 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303616 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303624 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303635 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303643 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303652 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303660 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303668 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303676 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 
07:22:07.303684 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303694 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303704 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303713 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303723 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303731 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303740 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303747 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303756 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303764 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303773 4835 feature_gate.go:330] unrecognized feature gate: Example Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303781 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303789 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303798 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303806 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303814 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303824 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303834 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303844 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303853 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303862 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303871 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303879 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303888 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303896 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303903 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303914 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303923 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303930 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303939 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303946 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303954 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303962 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303971 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303980 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303988 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.303995 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.304008 4835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.317208 4835 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.317262 4835 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" 
GOTRACEBACK="" Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317468 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317500 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317513 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317524 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317535 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317545 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317555 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317565 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317574 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317584 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317617 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317627 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317636 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317650 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317664 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317678 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317693 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317706 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317719 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317730 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317741 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317753 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317763 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317773 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317784 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317795 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317805 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317814 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317828 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317840 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317852 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317862 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317872 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317881 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317895 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317905 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317916 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317927 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317937 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317949 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317959 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317969 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317979 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317989 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.317999 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318009 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318019 4835 feature_gate.go:330] unrecognized feature gate: Example Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318030 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318040 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318049 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318057 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318065 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318072 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318080 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318088 4835 feature_gate.go:330] unrecognized feature gate: 
NetworkSegmentation Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318096 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318104 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318113 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318121 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318129 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318136 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318144 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318152 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318159 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318167 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318175 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318183 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318194 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318203 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318212 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318222 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.318235 4835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318511 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318528 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318539 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318548 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318558 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318566 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318574 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318582 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318590 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318598 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318606 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318613 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318621 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318632 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318642 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318650 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318658 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318666 4835 feature_gate.go:330] unrecognized feature gate: Example Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318673 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318681 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318689 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318697 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318705 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318713 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318721 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318728 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318736 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318744 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318752 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318760 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318767 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318775 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318783 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318790 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318799 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318807 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318816 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318823 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318831 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318838 4835 feature_gate.go:330] unrecognized feature gate: 
NutanixMultiSubnets Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318846 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318854 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318862 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318869 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318877 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318885 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318893 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318903 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318913 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318921 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318929 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318938 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318950 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318959 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318968 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318976 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318984 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.318992 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319000 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319008 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319015 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319024 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319033 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319041 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319049 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319057 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319064 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319072 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319081 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319089 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.319097 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.319109 4835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.319330 4835 server.go:940] "Client rotation is on, will bootstrap in background" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.326964 4835 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.327084 4835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
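[editor's note] The lines above show client certificate rotation working as intended: rotation is on, the existing kubeconfig is still valid (bootstrap.go:85), and the current cert/key pair is loaded from disk. Just below, the CSR POST fails with "connection refused" — expected this early in boot, while the API server at api-int.crc.testing:6443 is not yet up; the kubelet retries rotation in the background. Quick checks from the node:

    # Inspect the client certificate the kubelet loaded
    openssl x509 -noout -subject -enddate \
      -in /var/lib/kubelet/pki/kubelet-client-current.pem
    # Once the API server answers, pending rotation CSRs show up here
    oc get csr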
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.328998 4835 server.go:997] "Starting client certificate rotation"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.329047 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.330040 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-13 08:10:24.656371533 +0000 UTC
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.330138 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.353982 4835 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.357081 4835 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.360429 4835 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.382608 4835 log.go:25] "Validated CRI v1 runtime API"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.422478 4835 log.go:25] "Validated CRI v1 image API"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.424995 4835 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.429959 4835 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-01-07-18-30-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.430013 4835 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.458180 4835 manager.go:217] Machine: {Timestamp:2026-02-01 07:22:07.454501726 +0000 UTC m=+0.574938230 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:83c36967-9ad2-4029-85f1-c31be3b4de3a BootID:9d6ec0e7-f211-4b58-9cdd-b032c4656a66 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:44 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e0:f3:60 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e0:f3:60 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:10:c9:fd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c8:f3:6f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:bb:c9:74 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c0:b5:37 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b2:80:1f:43:2c:62 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:d5:a6:c2:28:79 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.458598 4835 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.458871 4835 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.461270 4835 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.461621 4835 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.461681 4835 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.462006 4835 topology_manager.go:138] "Creating topology manager with none policy"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.462026 4835 container_manager_linux.go:303] "Creating device plugin manager"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.462593 4835 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.462644 4835 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.462958 4835 state_mem.go:36] "Initialized new in-memory state store"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.463505 4835 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.468069 4835 kubelet.go:418] "Attempting to sync node with API server"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.468114 4835 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.468165 4835 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.468188 4835 kubelet.go:324] "Adding apiserver pod source"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.468209 4835 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.472815 4835 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.473730 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.473744 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.473884 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.473910 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.473948 4835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.476462 4835 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478086 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478129 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478144 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478158 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478180 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478206 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478219 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478241 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478257 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478273 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478294 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.478309 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.480543 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.481329 4835 server.go:1280] "Started kubelet"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.485311 4835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.485393 4835 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.486572 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Feb 01 07:22:07 crc systemd[1]: Started Kubernetes Kubelet.
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.486871 4835 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.488259 4835 server.go:460] "Adding debug handlers to kubelet server"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.489741 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.489796 4835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.490279 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:32:46.188746604 +0000 UTC
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.495449 4835 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.495461 4835 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.495515 4835 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.495505 4835 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.495521 4835 factory.go:55] Registering systemd factory
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.495729 4835 factory.go:221] Registration of the systemd container factory successfully
Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.495075 4835 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.496536 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.496655 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.496212 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.497655 4835 factory.go:153] Registering CRI-O factory
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.497692 4835 factory.go:221] Registration of the crio container factory successfully
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.497730 4835 factory.go:103] Registering Raw factory
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.497753 4835 manager.go:1196] Started watching for new ooms in manager
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.499352 4835 manager.go:319] Starting recovery of all containers
Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.503892 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18900e6fefa41331 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 07:22:07.481271089 +0000 UTC m=+0.601707583,LastTimestamp:2026-02-01 07:22:07.481271089 +0000 UTC m=+0.601707583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509626 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509704 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509731 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509751 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509772 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509790 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509807 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509829 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509851 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509871 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509888 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509907 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509924 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509946 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509967 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.509986 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510004 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510022 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510039 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510058 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510075 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510093 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510112 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510130 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510205 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510224 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510247 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510267 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510287 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510306 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510324 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510343 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510361 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510378 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510396 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510448 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510475 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510493 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510510 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510529 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510546 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510565 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510582 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510600 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510620 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510637 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510654 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510672 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510691 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510709 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510726 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510807 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510832 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510853 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510872 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510890 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510909 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510929 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510946 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510965 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.510981 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511000 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511018 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511034 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511052 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511069 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511088 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511105 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511123 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511139 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511156 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511175 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511192 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511209 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511226 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511244 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511261 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511279 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511297 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511319 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511337 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511374 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511393 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511439 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511467 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511484 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511501 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511518 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511535 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511552 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511569 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511585 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511604 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511620 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511637 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511654 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511672 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511723 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511742 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511760 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511777 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511793 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511814 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511832 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511856 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511875 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511896 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511915 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511934 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511953 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511971 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.511996 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512018 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512037 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512055 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512071 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512090 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512106 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512122 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512139 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512156 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512173 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512190 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512206 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512224 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512240 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512256 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512272 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512291 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.512311 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.515273 4835 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.515361 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.515477 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.515515 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.515566 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.515600 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.515822 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.515859 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.520691 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.520757 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.520803 4835 reconstruct.go:130] "Volume is marked as uncertain and
added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.520834 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.520862 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.520909 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.520938 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.520977 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521006 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521034 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521075 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521105 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521144 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521174 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521206 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521249 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521280 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521321 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521382 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521448 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521491 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521520 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521560 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521591 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521620 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521659 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521689 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521717 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521755 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521786 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521825 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521854 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521882 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521922 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521950 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.521987 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522016 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522127 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522171 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522202 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522237 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522262 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522290 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522329 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522355 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522463 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522509 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522534 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522804 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522842 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522876 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522932 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522960 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.522996 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523024 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523057 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523093 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523149 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523187 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523216 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523252 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523290 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523316 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523344 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523404 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523500 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523536 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523564 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523610 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523644 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523670 4835 reconstruct.go:97] "Volume reconstruction finished" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.523689 4835 reconciler.go:26] "Reconciler: start to sync state" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.532043 4835 manager.go:324] Recovery completed Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.547138 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.549874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.550040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.550177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.551490 4835 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.551632 4835 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.551751 4835 state_mem.go:36] "Initialized new in-memory state store" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.561912 4835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.564990 4835 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.565179 4835 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.565364 4835 kubelet.go:2335] "Starting kubelet main sync loop" Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.565603 4835 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 01 07:22:07 crc kubenswrapper[4835]: W0201 07:22:07.567187 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.567330 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.580025 4835 policy_none.go:49] "None policy: Start" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.581407 4835 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.581483 4835 state_mem.go:35] "Initializing new in-memory state store" Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.597086 4835 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.628936 4835 manager.go:334] "Starting Device Plugin manager" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.629002 4835 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.629036 4835 server.go:79] "Starting device plugin registration server" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.629696 4835 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.629722 4835 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.630547 4835 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.630716 4835 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.630770 4835 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.637490 4835 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.666870 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 01 07:22:07 crc kubenswrapper[4835]: 
I0201 07:22:07.667038 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.668514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.668574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.668592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.669012 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.669161 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.669223 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.670599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.670640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.670703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.670722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.670648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.670853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.671046 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.671301 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.671375 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.672741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.672888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.672917 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.672920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.672959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.672981 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.673213 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.673466 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.673540 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.674969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.675008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.675055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.675027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.675077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.675128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.675327 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.675550 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.675627 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.676733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.676774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.676792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.677003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.677014 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.677079 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.677047 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.677172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.678091 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.678136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.678155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.698169 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.725943 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726023 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726062 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726095 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726128 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726159 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726188 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726216 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726245 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726274 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726301 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726327 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc 
kubenswrapper[4835]: I0201 07:22:07.726353 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726382 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.726630 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.730258 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.732793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.732850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.732868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.732926 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.733580 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829233 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829292 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829329 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829359 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829387 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829439 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829469 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829500 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829530 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829559 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829588 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829618 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829647 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 07:22:07 crc 
kubenswrapper[4835]: I0201 07:22:07.829655 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829688 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829729 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829736 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829677 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829775 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829774 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829776 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829814 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 
07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829864 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829884 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829919 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829936 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829948 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.829930 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.830013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.934001 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.935455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.935498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.935515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:07 crc kubenswrapper[4835]: I0201 07:22:07.935548 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 07:22:07 crc kubenswrapper[4835]: E0201 07:22:07.936284 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Feb 
01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.000149 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.036087 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.048371 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:08 crc kubenswrapper[4835]: W0201 07:22:08.060646 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-bd290cc12b11d9cb36daafbdc27fa8c8f1eccd6ee47b32451c68ab75336a906f WatchSource:0}: Error finding container bd290cc12b11d9cb36daafbdc27fa8c8f1eccd6ee47b32451c68ab75336a906f: Status 404 returned error can't find the container with id bd290cc12b11d9cb36daafbdc27fa8c8f1eccd6ee47b32451c68ab75336a906f Feb 01 07:22:08 crc kubenswrapper[4835]: W0201 07:22:08.093185 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0573aa7bca9797a45d57a39e93cc064430185767489ffd8be924515d78a36910 WatchSource:0}: Error finding container 0573aa7bca9797a45d57a39e93cc064430185767489ffd8be924515d78a36910: Status 404 returned error can't find the container with id 0573aa7bca9797a45d57a39e93cc064430185767489ffd8be924515d78a36910 Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.098828 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:08 crc kubenswrapper[4835]: E0201 07:22:08.099100 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.109615 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:22:08 crc kubenswrapper[4835]: W0201 07:22:08.138712 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-632c55d6cededb3768032b494450bcd36d63fd3667040fe7d53733062dc35e4b WatchSource:0}: Error finding container 632c55d6cededb3768032b494450bcd36d63fd3667040fe7d53733062dc35e4b: Status 404 returned error can't find the container with id 632c55d6cededb3768032b494450bcd36d63fd3667040fe7d53733062dc35e4b Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.336972 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.338616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.338670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.338688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.338722 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 07:22:08 crc kubenswrapper[4835]: E0201 07:22:08.339298 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Feb 01 07:22:08 crc kubenswrapper[4835]: W0201 07:22:08.449582 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Feb 01 07:22:08 crc kubenswrapper[4835]: E0201 07:22:08.449701 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.488226 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.491331 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:19:16.901064461 +0000 UTC Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.570647 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0573aa7bca9797a45d57a39e93cc064430185767489ffd8be924515d78a36910"} Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.573152 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"60da8afdd675d2a3e94961eedd9cca20f7be4011ed148830c1c7909ae8f24201"} Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.574855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bd290cc12b11d9cb36daafbdc27fa8c8f1eccd6ee47b32451c68ab75336a906f"} Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.575691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"632c55d6cededb3768032b494450bcd36d63fd3667040fe7d53733062dc35e4b"} Feb 01 07:22:08 crc kubenswrapper[4835]: I0201 07:22:08.577241 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"78031de91880e1eaf467138477c56de99ab73de1f18d3e3264ca0026d3b66a80"} Feb 01 07:22:08 crc kubenswrapper[4835]: W0201 07:22:08.734729 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Feb 01 07:22:08 crc kubenswrapper[4835]: E0201 07:22:08.735219 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Feb 01 07:22:08 crc kubenswrapper[4835]: W0201 07:22:08.785638 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Feb 01 07:22:08 crc kubenswrapper[4835]: E0201 07:22:08.785779 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Feb 01 07:22:08 crc kubenswrapper[4835]: E0201 07:22:08.899881 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Feb 01 07:22:09 crc kubenswrapper[4835]: W0201 07:22:09.067142 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Feb 01 07:22:09 crc kubenswrapper[4835]: E0201 07:22:09.067280 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.139685 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.142123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.142181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.142199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.142234 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 07:22:09 crc kubenswrapper[4835]: E0201 07:22:09.142899 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.487390 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.491611 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 04:06:40.341402813 +0000 UTC Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.510259 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 01 07:22:09 crc kubenswrapper[4835]: E0201 07:22:09.511899 4835 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.584134 4835 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253" exitCode=0 Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.584271 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.584433 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253"} Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.585522 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.585565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:09 crc 
kubenswrapper[4835]: I0201 07:22:09.585604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.589601 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1"} Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.589636 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf"} Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.589650 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976"} Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.593052 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17" exitCode=0 Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.593135 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17"} Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.593305 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.594788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.594833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.594858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.596674 4835 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13" exitCode=0 Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.596795 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13"} Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.596892 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.598208 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.599483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.599510 4835 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.599522 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.599551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.599585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.599606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.600954 4835 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="857b570e7ae7dd450284342c471cf02691b7fa7eb5bd24ad05e6dd0115d1ff2d" exitCode=0 Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.600988 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"857b570e7ae7dd450284342c471cf02691b7fa7eb5bd24ad05e6dd0115d1ff2d"} Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.601052 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.602234 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.602273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:09 crc kubenswrapper[4835]: I0201 07:22:09.602322 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.487778 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.493528 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 01:33:15.485152871 +0000 UTC Feb 01 07:22:10 crc kubenswrapper[4835]: E0201 07:22:10.500978 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="3.2s" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.606468 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4"} Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.606510 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94"} Feb 01 07:22:10 crc 
kubenswrapper[4835]: I0201 07:22:10.606520 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54"} Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.606532 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2"} Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.608867 4835 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49" exitCode=0 Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.608907 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49"} Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.608992 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.609787 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.609810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.609818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.611423 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"44751d1619bcacbde4be80603e618132541e8aea35b1bea6e6d8805ac2a35c35"} Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.611491 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.612173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.612197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.612207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.614103 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063"} Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.614132 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.614134 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f"} Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.614226 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e"} Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.614769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.614789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.614798 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.616567 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8"} Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.616626 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.618309 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.618334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.618342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.652611 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.743367 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.744634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.744662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.744671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:10 crc kubenswrapper[4835]: I0201 07:22:10.744690 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 07:22:10 crc kubenswrapper[4835]: E0201 07:22:10.745112 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.494594 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:38:37.022169316 +0000 UTC Feb 01 07:22:11 crc 
kubenswrapper[4835]: I0201 07:22:11.625198 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88"} Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.625270 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.626736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.626790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.626807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.628946 4835 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769" exitCode=0 Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.629084 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.629137 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.629151 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.629369 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769"} Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.629656 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.630193 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.630557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.630602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.630620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.630649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.630677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.630693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.631798 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:11 crc 
kubenswrapper[4835]: I0201 07:22:11.631834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.631852 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.632766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.632813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:11 crc kubenswrapper[4835]: I0201 07:22:11.632832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.495190 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:49:43.415358828 +0000 UTC Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.635062 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f76a95142c00257f569b0db87094f23435274cbe36740d658bac63c26a55233"} Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.635130 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"51a4c738f66e1428697d199630cc541f018b1aa36edcb0e3e3ad32ddab2b5586"} Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.635156 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"64bfb072019b8c1917e27199bbb7b1491df307cb14257e4cd502f3062a674890"} Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.635179 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.635229 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.635236 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.636357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.636492 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.636382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.636548 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.636571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:12 crc kubenswrapper[4835]: I0201 07:22:12.637055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.373497 4835 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.375743 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.389819 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.495715 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:50:07.391701543 +0000 UTC Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.644070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8444f60530510645c3592013a63e5a5b3cdf6872788309d94d5a18fe1553a937"} Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.644141 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.644177 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.644142 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"64accb3c02d2092922d2534d7c21dd160d0ed2b2ff1cbc19870174f818ba4486"} Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.645668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.645722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.645740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.645679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.645789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.645808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.820326 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.853264 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.853509 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.854574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.854615 4835 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.854625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.945295 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.946691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.946715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.946724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:13 crc kubenswrapper[4835]: I0201 07:22:13.946770 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.496301 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:30:21.202606854 +0000 UTC Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.646561 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.646587 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.648389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.648587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.648707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.649805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.650006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.650148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.900331 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.900570 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.900617 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.901963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.902010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:14 crc kubenswrapper[4835]: I0201 07:22:14.902030 
4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.252264 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.470581 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.496583 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:55:58.250913194 +0000 UTC Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.649113 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.649206 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.649292 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.650677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.650757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.650782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.650921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.650998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:15 crc kubenswrapper[4835]: I0201 07:22:15.651024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:16 crc kubenswrapper[4835]: I0201 07:22:16.313204 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:16 crc kubenswrapper[4835]: I0201 07:22:16.496742 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 07:38:19.638167198 +0000 UTC Feb 01 07:22:16 crc kubenswrapper[4835]: I0201 07:22:16.652612 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:16 crc kubenswrapper[4835]: I0201 07:22:16.653649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:16 crc kubenswrapper[4835]: I0201 07:22:16.653691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:16 crc kubenswrapper[4835]: I0201 07:22:16.653701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:17 crc kubenswrapper[4835]: I0201 07:22:17.497157 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 
00:18:38.668537359 +0000 UTC Feb 01 07:22:17 crc kubenswrapper[4835]: E0201 07:22:17.637639 4835 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 01 07:22:18 crc kubenswrapper[4835]: I0201 07:22:18.427983 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 01 07:22:18 crc kubenswrapper[4835]: I0201 07:22:18.428394 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:18 crc kubenswrapper[4835]: I0201 07:22:18.429318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:18 crc kubenswrapper[4835]: I0201 07:22:18.429359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:18 crc kubenswrapper[4835]: I0201 07:22:18.429376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:18 crc kubenswrapper[4835]: I0201 07:22:18.497482 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:42:03.856183323 +0000 UTC Feb 01 07:22:19 crc kubenswrapper[4835]: I0201 07:22:19.497869 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:38:12.825206789 +0000 UTC Feb 01 07:22:19 crc kubenswrapper[4835]: I0201 07:22:19.758938 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:19 crc kubenswrapper[4835]: I0201 07:22:19.759283 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:19 crc kubenswrapper[4835]: I0201 07:22:19.761087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:19 crc kubenswrapper[4835]: I0201 07:22:19.761159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:19 crc kubenswrapper[4835]: I0201 07:22:19.761177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:19 crc kubenswrapper[4835]: I0201 07:22:19.766604 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:20 crc kubenswrapper[4835]: I0201 07:22:20.498477 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:06:24.146066261 +0000 UTC Feb 01 07:22:20 crc kubenswrapper[4835]: I0201 07:22:20.663771 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:20 crc kubenswrapper[4835]: I0201 07:22:20.664741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:20 crc kubenswrapper[4835]: I0201 07:22:20.664771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:20 crc kubenswrapper[4835]: I0201 07:22:20.664781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 01 07:22:21 crc kubenswrapper[4835]: W0201 07:22:21.033698 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 01 07:22:21 crc kubenswrapper[4835]: I0201 07:22:21.033827 4835 trace.go:236] Trace[2072267801]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Feb-2026 07:22:11.032) (total time: 10001ms): Feb 01 07:22:21 crc kubenswrapper[4835]: Trace[2072267801]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:22:21.033) Feb 01 07:22:21 crc kubenswrapper[4835]: Trace[2072267801]: [10.001529541s] [10.001529541s] END Feb 01 07:22:21 crc kubenswrapper[4835]: E0201 07:22:21.033863 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 01 07:22:21 crc kubenswrapper[4835]: I0201 07:22:21.489582 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 01 07:22:21 crc kubenswrapper[4835]: I0201 07:22:21.499447 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:33:33.338540968 +0000 UTC Feb 01 07:22:21 crc kubenswrapper[4835]: W0201 07:22:21.542814 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 01 07:22:21 crc kubenswrapper[4835]: I0201 07:22:21.542899 4835 trace.go:236] Trace[260053569]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Feb-2026 07:22:11.541) (total time: 10001ms): Feb 01 07:22:21 crc kubenswrapper[4835]: Trace[260053569]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:22:21.542) Feb 01 07:22:21 crc kubenswrapper[4835]: Trace[260053569]: [10.001111045s] [10.001111045s] END Feb 01 07:22:21 crc kubenswrapper[4835]: E0201 07:22:21.542918 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 01 07:22:21 crc kubenswrapper[4835]: I0201 07:22:21.675527 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 01 07:22:21 crc kubenswrapper[4835]: I0201 07:22:21.675677 4835 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 01 07:22:21 crc kubenswrapper[4835]: I0201 07:22:21.681143 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 01 07:22:21 crc kubenswrapper[4835]: I0201 07:22:21.681233 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 01 07:22:22 crc kubenswrapper[4835]: I0201 07:22:22.499584 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:42:44.873776974 +0000 UTC
Feb 01 07:22:22 crc kubenswrapper[4835]: I0201 07:22:22.760916 4835 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 01 07:22:22 crc kubenswrapper[4835]: I0201 07:22:22.760989 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 01 07:22:23 crc kubenswrapper[4835]: I0201 07:22:23.499680 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:51:41.361720971 +0000 UTC
Feb 01 07:22:24 crc kubenswrapper[4835]: I0201 07:22:24.500017 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:21:11.314178942 +0000 UTC
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.261255 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.261590 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.263327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.263382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.263444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.268406 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.500912 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 03:44:41.982013331 +0000 UTC
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.510549 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.510778 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.512441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.512527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.512555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.534860 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.554349 4835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.677941 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.678048 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.679920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.679966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.679984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.680074 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.680171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:25 crc kubenswrapper[4835]: I0201 07:22:25.680198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.501392 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:26:38.783742335 +0000 UTC
Feb 01 07:22:26 crc kubenswrapper[4835]: E0201 07:22:26.678453 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.683010 4835 trace.go:236] Trace[202954038]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Feb-2026 07:22:11.806) (total time: 14876ms):
Feb 01 07:22:26 crc kubenswrapper[4835]: Trace[202954038]: ---"Objects listed" error: 14876ms (07:22:26.682)
Feb 01 07:22:26 crc kubenswrapper[4835]: Trace[202954038]: [14.87684526s] [14.87684526s] END
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.683040 4835 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.683057 4835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.685118 4835 trace.go:236] Trace[1147390178]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Feb-2026 07:22:11.820) (total time: 14864ms):
Feb 01 07:22:26 crc kubenswrapper[4835]: Trace[1147390178]: ---"Objects listed" error: 14864ms (07:22:26.684)
Feb 01 07:22:26 crc kubenswrapper[4835]: Trace[1147390178]: [14.86421575s] [14.86421575s] END
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.685170 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 01 07:22:26 crc kubenswrapper[4835]: E0201 07:22:26.686201 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.689641 4835 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726169 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55990->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726244 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55990->192.168.126.11:17697: read: connection reset by peer"
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726607 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726654 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726909 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 01 07:22:26 crc kubenswrapper[4835]: I0201 07:22:26.726940 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.476710 4835 apiserver.go:52] "Watching apiserver"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.480885 4835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.481296 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.481843 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482056 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482103 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482248 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.482272 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482370 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.482433 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.482520 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.482588 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.487519 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.491481 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.491688 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.491801 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.492002 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.492250 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.492629 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.493062 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.493313 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.497779 4835 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.502358 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:30:46.872176828 +0000 UTC
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.536333 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.553164 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.576084 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.588178 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.588235 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.589523 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.589960 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590011 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590073 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590110 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590132 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590152 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590173 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590160 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590197 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590332 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590388 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590570 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590634 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590682 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592008 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593638 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593709 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593764 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.590583 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.591225 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.591701 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.591782 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592123 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592164 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592619 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.592767 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593029 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593110 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593611 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593762 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.593820 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594020 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594060 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594093 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594157 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594189 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594495 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594495 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594528 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594568 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594599 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594629 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594659 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594689 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594723 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594758 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594791 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594821 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594852 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594881 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594913 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594943 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.594973 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595019 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595050 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595108 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595141 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595140 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595176 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595225 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595270 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595311 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595343 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595378 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595447 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595480 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595511 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595544 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595576 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595618 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595669 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595717 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595749 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595765 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595781 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.595912 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596211 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596279 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596333 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596385 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596445 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596476 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596530 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596583 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596634 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596683 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596688 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596735 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596793 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596844 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596893 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596944 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.596997 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597049 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597104 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597155 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597204 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597255 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597305 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597356 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597403 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597504 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597564 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597616 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597743 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597815 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597878 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597928 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597960 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.597979 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598033 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598085 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598139 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598188 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598235 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598285 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID:
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598462 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598478 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.598540 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.599284 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:28.099255386 +0000 UTC m=+21.219691830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600456 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600504 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600511 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600536 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600646 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600914 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.600932 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601323 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601468 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601545 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601806 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.601818 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602159 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602223 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602457 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602550 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602618 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602659 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602680 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602724 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602739 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602815 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602942 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602946 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.602990 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603031 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603064 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603129 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603163 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603193 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603223 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603254 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603284 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603317 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603322 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603349 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603402 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603473 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603537 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603681 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603696 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603743 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603800 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603819 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603869 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.603930 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604002 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604054 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604094 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604106 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604265 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604314 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604318 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604342 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604354 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604506 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604558 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604594 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604629 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604697 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604730 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604764 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604789 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604801 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604870 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.604907 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605035 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605090 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605142 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605193 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605255 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 07:22:27 crc kubenswrapper[4835]: 
I0201 07:22:27.605303 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605348 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605397 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605512 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605560 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605609 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605655 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605702 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605753 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605801 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 
07:22:27.605846 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605893 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605941 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.605991 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606092 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606141 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606195 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606250 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606305 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606365 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606787 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606851 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606906 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.606964 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607015 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607070 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607160 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607230 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607286 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607338 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607389 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607479 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607520 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607555 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607591 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607627 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607660 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607696 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607734 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607806 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607853 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607880 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607909 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.607961 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608002 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608040 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608077 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608114 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608126 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608153 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608206 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608256 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608337 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608393 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608556 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:27 crc 
kubenswrapper[4835]: I0201 07:22:27.608583 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608627 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608770 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608806 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608838 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608870 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608902 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608914 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608934 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608968 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609001 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609031 4835 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609063 4835 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609093 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609123 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609153 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609183 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609215 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609253 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609285 4835 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609317 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" 
DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609348 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609377 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609407 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609470 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609501 4835 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609532 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609561 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609589 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609619 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609647 4835 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609678 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609708 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609739 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609769 4835 
reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609800 4835 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609832 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609864 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609895 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609924 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609951 4835 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609983 4835 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610018 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610048 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610079 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610111 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.608928 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: 
"bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610119 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609276 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609665 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610181 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.610239 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.609774 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.611984 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612041 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612160 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612184 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612784 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.613015 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612981 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.613556 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.613695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.614779 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.614827 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.614924 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.615706 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.615853 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.615916 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616209 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616277 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616499 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616531 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616727 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.616944 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617053 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617094 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617196 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617216 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617479 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617558 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617737 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.617768 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.618083 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.619009 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.619213 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.619314 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-01 07:22:28.119275799 +0000 UTC m=+21.239712273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.619792 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.619819 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.619840 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.620704 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.620840 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.620939 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.621059 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.622790 4835 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.622982 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624270 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624514 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624506 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624650 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.624922 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.625190 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.612003 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.625821 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.625852 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.625900 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626251 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626328 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626354 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626381 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626392 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626384 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626729 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.626871 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.627190 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.627512 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.628243 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.628922 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.629055 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.629609 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.630124 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.630756 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.630961 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631126 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631187 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631239 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631660 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631720 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.631912 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.632400 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.632966 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.633131 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.633261 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.636239 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.636274 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.636810 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.636944 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:28.136904323 +0000 UTC m=+21.257340807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.637088 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.637879 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.638262 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.638668 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.638780 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639037 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639294 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639042 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639511 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.639571 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.642042 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.644599 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.646953 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.649014 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.651329 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.656681 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.657335 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.657858 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.657887 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.657902 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.657969 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:28.157946752 +0000 UTC m=+21.278383196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.658240 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.658390 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.658467 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.658482 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:27 crc kubenswrapper[4835]: E0201 07:22:27.658518 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:28.158507436 +0000 UTC m=+21.278943880 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.659196 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.663133 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.663530 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.663755 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.663848 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.665547 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.667864 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.669586 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.669737 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.669754 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.669860 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.670099 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.670237 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.670507 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.671296 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.674850 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.674999 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675083 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675284 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675141 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675649 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675728 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.675953 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.679554 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.679745 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.679795 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680246 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680340 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680461 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680493 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.680569 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.681019 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.681144 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.681891 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.681955 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.682040 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.682287 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.682383 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683078 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683188 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683477 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683534 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683714 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683948 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683954 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.683998 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.685623 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.685632 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.685786 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.684640 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.685928 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.686439 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.686528 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.692926 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.694157 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.697563 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88" exitCode=255 Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.697628 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88"} Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.707306 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712603 4835 scope.go:117] "RemoveContainer" containerID="39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712664 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712741 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712824 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712845 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712860 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712872 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712884 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712896 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712908 4835 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.712922 4835 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713065 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713109 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713139 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713157 4835 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713171 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713186 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713200 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713214 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713225 4835 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713237 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713248 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713260 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713271 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713282 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713295 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713306 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713318 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713329 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713341 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713352 4835 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713364 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713375 4835 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713387 4835 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node 
\"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713400 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713443 4835 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713458 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713472 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713488 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713502 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713517 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713530 4835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713543 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713560 4835 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713636 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713681 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713714 4835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713742 4835 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713772 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713801 4835 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713828 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713855 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713883 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713903 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713910 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.713987 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714003 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714042 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714055 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714233 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" 
DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714272 4835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714299 4835 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714326 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714352 4835 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714379 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714404 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714475 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714503 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714527 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714551 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714577 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714603 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714630 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 
07:22:27.714656 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714684 4835 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714709 4835 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714737 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714765 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714794 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714822 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714850 4835 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714877 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714904 4835 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714931 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714957 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.714984 4835 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath 
\"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715010 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715037 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715062 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715088 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715113 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715173 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715203 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715228 4835 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715258 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715286 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715311 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715337 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715367 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: 
I0201 07:22:27.715394 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715553 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715572 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715584 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715596 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715608 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715619 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715631 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715645 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715656 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715669 4835 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715680 4835 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715693 4835 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 
07:22:27.715704 4835 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715716 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715727 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715739 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715752 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715764 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715775 4835 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715787 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715798 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715810 4835 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715823 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715835 4835 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715847 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715858 4835 reconciler_common.go:293] "Volume detached for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715869 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715881 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715894 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715905 4835 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715917 4835 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715928 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715940 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715957 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715970 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715981 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.715992 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716003 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716014 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716026 4835 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716037 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716048 4835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716059 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716071 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716084 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716097 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716109 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716120 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716131 4835 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716160 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716174 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716188 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716201 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.716213 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.725240 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.732809 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.732932 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.745847 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.746347 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.757358 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.767709 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.778611 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.788457 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.798588 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.805923 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.810129 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.816780 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.816823 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.816841 4835 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 01 07:22:27 crc kubenswrapper[4835]: W0201 07:22:27.819834 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b86ae587f6a2b13900dfb26eb7b6e37e3713336862a41e4a8a2f1668adef115e WatchSource:0}: Error finding container b86ae587f6a2b13900dfb26eb7b6e37e3713336862a41e4a8a2f1668adef115e: Status 404 returned error can't find the container with id b86ae587f6a2b13900dfb26eb7b6e37e3713336862a41e4a8a2f1668adef115e Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.820578 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.824075 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.832069 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.832257 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 01 07:22:27 crc kubenswrapper[4835]: W0201 07:22:27.843555 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-62ac44151f8c113bdf04964eeb0f36fed31f8c3623d257efc91d87091d9f904e WatchSource:0}: Error finding container 62ac44151f8c113bdf04964eeb0f36fed31f8c3623d257efc91d87091d9f904e: Status 404 returned error can't find the container with id 62ac44151f8c113bdf04964eeb0f36fed31f8c3623d257efc91d87091d9f904e Feb 01 07:22:27 crc kubenswrapper[4835]: W0201 07:22:27.853099 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-71e5ee8280fe8e643b0d25617f14af9cf60893b7a85f2b7e4a2b07b02bc2f9b5 WatchSource:0}: Error finding container 71e5ee8280fe8e643b0d25617f14af9cf60893b7a85f2b7e4a2b07b02bc2f9b5: Status 404 returned error can't find the container with id 71e5ee8280fe8e643b0d25617f14af9cf60893b7a85f2b7e4a2b07b02bc2f9b5 Feb 01 07:22:27 crc kubenswrapper[4835]: I0201 07:22:27.934965 4835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.119171 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.119321 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:29.119307377 +0000 UTC m=+22.239743811 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.220345 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.220533 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.220618 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220638 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.220666 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220682 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220704 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220779 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:29.220751758 +0000 UTC m=+22.341188222 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220799 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220903 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:29.220874021 +0000 UTC m=+22.341310495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220903 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220800 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220976 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.220999 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.221037 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:29.221006235 +0000 UTC m=+22.341442709 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.221074 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-01 07:22:29.221059946 +0000 UTC m=+22.341496410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.502788 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:41:07.879752202 +0000 UTC Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.566456 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:28 crc kubenswrapper[4835]: E0201 07:22:28.566571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.701206 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"62ac44151f8c113bdf04964eeb0f36fed31f8c3623d257efc91d87091d9f904e"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.703463 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.703493 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.703508 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b86ae587f6a2b13900dfb26eb7b6e37e3713336862a41e4a8a2f1668adef115e"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.705595 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.707346 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.707630 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 
07:22:28.708792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.708827 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"71e5ee8280fe8e643b0d25617f14af9cf60893b7a85f2b7e4a2b07b02bc2f9b5"} Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.720696 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.731269 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.744246 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.760449 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01
T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.782950 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.799347 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.816821 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.834740 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.852533 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.871403 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.884662 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.898057 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.914988 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:28 crc kubenswrapper[4835]: I0201 07:22:28.928789 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:28Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.129751 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.130023 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.129983529 +0000 UTC m=+24.250419993 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.230597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.230662 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.230696 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.230735 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230831 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230869 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230903 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230912 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.230891107 +0000 UTC m=+24.351327561 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230915 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230925 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230965 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.230954719 +0000 UTC m=+24.351391163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.230868 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.231031 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.231038 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.2310113 +0000 UTC m=+24.351447764 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.231051 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.231137 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:31.231107272 +0000 UTC m=+24.351543736 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.503216 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:25:41.328745264 +0000 UTC Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.566147 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.566328 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.566491 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:29 crc kubenswrapper[4835]: E0201 07:22:29.566670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.572479 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.573824 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.576019 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.577240 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.579392 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.580749 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.582022 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.585089 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.586532 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.588828 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.590066 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.592254 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.592934 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.593750 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.594939 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.595621 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.596998 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.597672 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.598533 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.599981 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.600757 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.602110 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.602797 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.604329 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.605018 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.605856 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.607373 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.608050 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.609373 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.610354 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.611662 4835 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.611847 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.614240 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.615398 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.615989 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.618280 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.619300 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.620710 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.621728 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.623529 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.624218 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.625896 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.626900 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.628683 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.629135 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.630046 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.630694 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.631829 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.632295 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.633103 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.633640 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.634518 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.635393 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.636123 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.763531 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.769333 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.773288 4835 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.780189 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.794097 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.811080 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.829524 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.853569 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.871769 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.888860 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.906551 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.925886 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.943374 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.956389 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.968855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.980923 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:29 crc kubenswrapper[4835]: I0201 07:22:29.997307 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:29Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:30 crc kubenswrapper[4835]: I0201 07:22:30.009264 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:30Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:30 crc kubenswrapper[4835]: I0201 07:22:30.503973 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:22:28.180010113 +0000 UTC Feb 01 07:22:30 crc kubenswrapper[4835]: I0201 07:22:30.565783 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:30 crc kubenswrapper[4835]: E0201 07:22:30.565971 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:30 crc kubenswrapper[4835]: E0201 07:22:30.728531 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.147887 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.148183 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.148137922 +0000 UTC m=+28.268574406 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.249478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.249555 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.249600 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.249636 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249694 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249759 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249783 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249820 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249839 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.249801399 +0000 UTC m=+28.370237863 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249839 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249872 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.24985886 +0000 UTC m=+28.370295334 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249912 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.249888701 +0000 UTC m=+28.370325175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249943 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.249999 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.250022 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.250130 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:35.250103956 +0000 UTC m=+28.370540420 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.504629 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:14:57.17944135 +0000 UTC Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.566324 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.566397 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.566488 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:31 crc kubenswrapper[4835]: E0201 07:22:31.566577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.718340 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de"} Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.736858 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.750672 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.766773 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.786775 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.803248 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.817877 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.840229 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:31 crc kubenswrapper[4835]: I0201 07:22:31.856214 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:31Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.357830 4835 csr.go:261] certificate signing request csr-c5d78 is approved, waiting to be issued Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.373801 4835 csr.go:257] certificate signing request csr-c5d78 is issued Feb 01 07:22:32 crc 
kubenswrapper[4835]: I0201 07:22:32.488032 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-d8kfl"] Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.488304 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.490999 4835 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.491036 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.491077 4835 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.491090 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.492310 4835 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.492338 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.504941 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 16:42:15.67744549 +0000 UTC Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.506828 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.532276 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.549456 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.559268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tp8v\" (UniqueName: \"kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.559309 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0c6d0e64-7406-4a2b-8006-8381549b35e6-hosts-file\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.566601 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.566699 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.568390 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.589310 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.613607 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.628748 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.642043 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.654267 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.660070 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0c6d0e64-7406-4a2b-8006-8381549b35e6-hosts-file\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.659898 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0c6d0e64-7406-4a2b-8006-8381549b35e6-hosts-file\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.660733 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tp8v\" (UniqueName: \"kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.966629 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-25s9j"] Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.966905 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-25s9j" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.967438 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qtzjl"] Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.968403 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.969595 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.970577 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wdt78"] Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.971479 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.973442 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.973586 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.982135 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.982388 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.982707 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.982866 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.982991 4835 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.983039 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.983089 4835 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.983102 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.983143 4835 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.983156 4835 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: W0201 07:22:32.983194 4835 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Feb 01 07:22:32 crc kubenswrapper[4835]: E0201 07:22:32.983208 4835 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.986296 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 01 07:22:32 crc kubenswrapper[4835]: I0201 07:22:32.993311 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:32Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.019016 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.047622 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063632 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-multus-certs\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063672 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-daemon-config\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc 
kubenswrapper[4835]: I0201 07:22:33.063690 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-netns\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063717 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/303c450e-4b2d-4908-84e6-df8b444ed640-mcd-auth-proxy-config\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063734 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvhf\" (UniqueName: \"kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063749 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-os-release\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063763 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cni-binary-copy\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063777 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063797 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/303c450e-4b2d-4908-84e6-df8b444ed640-rootfs\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063811 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-etc-kubernetes\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-binary-copy\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063842 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-cni-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063858 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-socket-dir-parent\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063871 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-hostroot\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063894 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-system-cni-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063911 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063955 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-bin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063972 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-k8s-cni-cncf-io\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.063987 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-kubelet\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064001 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-conf-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064018 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksb2t\" (UniqueName: \"kubernetes.io/projected/00cf5926-f943-44c0-a351-db83ab17c2a1-kube-api-access-ksb2t\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064032 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-os-release\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064046 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-multus\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064065 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/303c450e-4b2d-4908-84e6-df8b444ed640-proxy-tls\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064079 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-system-cni-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064093 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-cnibin\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cnibin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.064127 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwv4d\" (UniqueName: \"kubernetes.io/projected/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-kube-api-access-qwv4d\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.081552 4835 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.086672 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.088759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.088793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.088803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.088915 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 01 07:22:33 crc 
kubenswrapper[4835]: I0201 07:22:33.109952 4835 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.110247 4835 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111274 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111283 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.111908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.150555 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.164923 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-os-release\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.164970 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-netns\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165011 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/303c450e-4b2d-4908-84e6-df8b444ed640-mcd-auth-proxy-config\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165036 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvhf\" (UniqueName: \"kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165111 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-os-release\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165230 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-netns\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165068 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cni-binary-copy\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165536 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165675 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/303c450e-4b2d-4908-84e6-df8b444ed640-rootfs\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165718 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/303c450e-4b2d-4908-84e6-df8b444ed640-mcd-auth-proxy-config\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165750 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cni-binary-copy\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165571 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/303c450e-4b2d-4908-84e6-df8b444ed640-rootfs\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165801 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-etc-kubernetes\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165825 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-binary-copy\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165845 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-cni-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165861 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-socket-dir-parent\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165883 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165903 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-hostroot\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165862 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-etc-kubernetes\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-system-cni-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.165938 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-system-cni-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166004 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-socket-dir-parent\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-cni-dir\") pod 
\"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166015 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-bin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166044 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-hostroot\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166047 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-bin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166053 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-conf-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166079 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-conf-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166111 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-k8s-cni-cncf-io\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166131 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-kubelet\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166155 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksb2t\" (UniqueName: \"kubernetes.io/projected/00cf5926-f943-44c0-a351-db83ab17c2a1-kube-api-access-ksb2t\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166179 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-os-release\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166196 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-multus\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166213 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-kubelet\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166229 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/303c450e-4b2d-4908-84e6-df8b444ed640-proxy-tls\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166247 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-system-cni-dir\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166271 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-cnibin\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166283 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-os-release\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166283 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-binary-copy\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166293 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cnibin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166312 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwv4d\" (UniqueName: \"kubernetes.io/projected/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-kube-api-access-qwv4d\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166322 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-system-cni-dir\") pod 
\"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166330 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-var-lib-cni-multus\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166338 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-multus-certs\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166267 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166343 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/00cf5926-f943-44c0-a351-db83ab17c2a1-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166364 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-daemon-config\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166369 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/00cf5926-f943-44c0-a351-db83ab17c2a1-cnibin\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166247 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-k8s-cni-cncf-io\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166438 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-host-run-multus-certs\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.166446 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-cnibin\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc 
kubenswrapper[4835]: I0201 07:22:33.166903 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-multus-daemon-config\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.170389 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.172748 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.176756 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.192608 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwv4d\" (UniqueName: \"kubernetes.io/projected/c9342eb7-b5ae-47b2-a56d-91ae886e5f0e-kube-api-access-qwv4d\") pod \"multus-25s9j\" (UID: \"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\") " pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.198685 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.198742 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.198833 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksb2t\" (UniqueName: \"kubernetes.io/projected/00cf5926-f943-44c0-a351-db83ab17c2a1-kube-api-access-ksb2t\") pod \"multus-additional-cni-plugins-qtzjl\" (UID: \"00cf5926-f943-44c0-a351-db83ab17c2a1\") " pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204222 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.204231 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.211185 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.215830 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.219162 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.226237 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.229714 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.234976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.235015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.235024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.235039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.235056 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.242071 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-c
ontroller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.250969 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet 
has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406ee
c4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\
\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.251106 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.252809 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.259582 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.269172 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.281045 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.283435 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-25s9j" Feb 01 07:22:33 crc kubenswrapper[4835]: W0201 07:22:33.295265 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9342eb7_b5ae_47b2_a56d_91ae886e5f0e.slice/crio-75288a7694f79f9f3fd591cd8ac9e443b8f1884c02c2dbc728156263a8802025 WatchSource:0}: Error finding container 75288a7694f79f9f3fd591cd8ac9e443b8f1884c02c2dbc728156263a8802025: Status 404 returned error can't find the container with id 75288a7694f79f9f3fd591cd8ac9e443b8f1884c02c2dbc728156263a8802025 Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.295614 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.300299 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.316987 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.330720 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.344274 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.357996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358037 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358078 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.358081 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.366798 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5z5dl"] Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.367951 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.370535 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371053 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371369 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371488 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371534 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.371557 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 01 07:22:33 crc 
kubenswrapper[4835]: I0201 07:22:33.372790 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.372907 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.374150 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.375150 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-01 07:17:32 +0000 UTC, rotation deadline is 2026-12-21 10:06:42.778514578 +0000 UTC Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.375254 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7754h44m9.403264278s for next certificate rotation Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.384368 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.399351 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.412229 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.424259 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.436249 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.454562 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.462926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.462980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.462998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.463020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.463038 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469044 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469088 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469113 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469139 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469181 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469204 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469234 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469257 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469279 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469314 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469348 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469375 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469397 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469454 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469489 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469509 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469534 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469554 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469578 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.469601 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.478602 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.506375 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:17:49.336092164 +0000 UTC Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.506492 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.510156 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.522517 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.548231 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.560425 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565318 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565770 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.565873 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.565902 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.566006 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570017 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570064 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570098 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570128 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570195 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570141 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570125 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570343 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570388 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570428 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570447 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570465 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570497 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570513 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570508 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570538 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570541 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570577 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570613 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570619 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570651 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570700 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570735 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570758 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570787 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570857 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570857 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570903 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570937 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570923 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570861 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.570632 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.571296 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.571379 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.571433 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.574364 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.576829 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.589373 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.591502 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") pod \"ovnkube-node-5z5dl\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.605530 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.617113 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.668051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.668092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 
07:22:33.668101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.668116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.668127 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.678090 4835 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.678138 4835 projected.go:194] Error preparing data for projected volume kube-api-access-6tp8v for pod openshift-dns/node-resolver-d8kfl: failed to sync configmap cache: timed out waiting for the condition Feb 01 07:22:33 crc kubenswrapper[4835]: E0201 07:22:33.678186 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v podName:0c6d0e64-7406-4a2b-8006-8381549b35e6 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:34.17816963 +0000 UTC m=+27.298606054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6tp8v" (UniqueName: "kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v") pod "node-resolver-d8kfl" (UID: "0c6d0e64-7406-4a2b-8006-8381549b35e6") : failed to sync configmap cache: timed out waiting for the condition Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.680380 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.687593 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 01 07:22:33 crc kubenswrapper[4835]: W0201 07:22:33.693998 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd62f19b_07ab_4cc5_84a3_2f097c278de7.slice/crio-f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8 WatchSource:0}: Error finding container f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8: Status 404 returned error can't find the container with id f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8 Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.728610 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.730700 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerStarted","Data":"3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.730787 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerStarted","Data":"6e426f333048639d95a80b52286ac07a23c058dc4a44f49da8cb6d15b2530297"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.732296 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.732350 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"75288a7694f79f9f3fd591cd8ac9e443b8f1884c02c2dbc728156263a8802025"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.753461 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.772818 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.778674 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.795521 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.813370 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.840014 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.841293 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.850545 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.860864 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/303c450e-4b2d-4908-84e6-df8b444ed640-proxy-tls\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874941 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874958 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.874971 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.876719 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.887827 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.897974 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.910146 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.920271 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.921861 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.933381 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.947465 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.958238 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.972331 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.977730 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:33Z","lastTransitionTime":"2026-02-01T07:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:33 crc kubenswrapper[4835]: I0201 07:22:33.995083 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:33Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.009359 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc 
kubenswrapper[4835]: I0201 07:22:34.022504 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.038618 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.061090 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.074960 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.079993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.080044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.080059 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.080082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.080098 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.093453 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.107268 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.126075 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.147358 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.165701 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: E0201 07:22:34.182652 4835 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182667 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: E0201 07:22:34.182709 4835 projected.go:194] Error preparing data for projected volume kube-api-access-jpvhf for pod openshift-machine-config-operator/machine-config-daemon-wdt78: failed to sync configmap cache: timed out waiting for the condition Feb 01 
07:22:34 crc kubenswrapper[4835]: E0201 07:22:34.182770 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf podName:303c450e-4b2d-4908-84e6-df8b444ed640 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:34.682749462 +0000 UTC m=+27.803185906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jpvhf" (UniqueName: "kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf") pod "machine-config-daemon-wdt78" (UID: "303c450e-4b2d-4908-84e6-df8b444ed640") : failed to sync configmap cache: timed out waiting for the condition Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.182890 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.276809 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tp8v\" (UniqueName: \"kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.282289 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tp8v\" (UniqueName: \"kubernetes.io/projected/0c6d0e64-7406-4a2b-8006-8381549b35e6-kube-api-access-6tp8v\") pod \"node-resolver-d8kfl\" (UID: \"0c6d0e64-7406-4a2b-8006-8381549b35e6\") " pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285476 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.285520 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.301743 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-d8kfl" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.304180 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 01 07:22:34 crc kubenswrapper[4835]: W0201 07:22:34.315308 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c6d0e64_7406_4a2b_8006_8381549b35e6.slice/crio-d81811d48a9723c81dce6c75e322f5295875e52a0045f988a1cf94fb861eb255 WatchSource:0}: Error finding container d81811d48a9723c81dce6c75e322f5295875e52a0045f988a1cf94fb861eb255: Status 404 returned error can't find the container with id d81811d48a9723c81dce6c75e322f5295875e52a0045f988a1cf94fb861eb255 Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.387568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.387605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.387638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.388717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.388767 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.491966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.492007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.492046 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.492067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.492078 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.507530 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 09:23:47.639314025 +0000 UTC Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.566875 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:34 crc kubenswrapper[4835]: E0201 07:22:34.567086 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594554 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.594568 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.700799 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.736720 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764" exitCode=0 Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.736788 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.740224 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d8kfl" event={"ID":"0c6d0e64-7406-4a2b-8006-8381549b35e6","Type":"ContainerStarted","Data":"e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.740340 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d8kfl" event={"ID":"0c6d0e64-7406-4a2b-8006-8381549b35e6","Type":"ContainerStarted","Data":"d81811d48a9723c81dce6c75e322f5295875e52a0045f988a1cf94fb861eb255"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.742092 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2" exitCode=0 Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.742137 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.756246 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.772257 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.784029 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvhf\" (UniqueName: \"kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.790110 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpvhf\" (UniqueName: \"kubernetes.io/projected/303c450e-4b2d-4908-84e6-df8b444ed640-kube-api-access-jpvhf\") pod \"machine-config-daemon-wdt78\" (UID: \"303c450e-4b2d-4908-84e6-df8b444ed640\") " pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.791435 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.801865 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.802936 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.811183 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\
\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: W0201 07:22:34.819601 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod303c450e_4b2d_4908_84e6_df8b444ed640.slice/crio-36357963120e59ecd5d22213f9bd0b316d664159db2bb8a508d29de03da3fb3a WatchSource:0}: Error finding container 36357963120e59ecd5d22213f9bd0b316d664159db2bb8a508d29de03da3fb3a: Status 404 returned error can't find the container with id 36357963120e59ecd5d22213f9bd0b316d664159db2bb8a508d29de03da3fb3a Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.824550 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.843539 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.858820 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.880926 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.899074 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904882 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.904911 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:34Z","lastTransitionTime":"2026-02-01T07:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.916894 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.933904 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.952163 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.981200 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:34 crc kubenswrapper[4835]: I0201 07:22:34.997654 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:34Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006537 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.006625 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.008615 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.020677 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.032310 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.049391 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.067956 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.082960 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.097812 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.109147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.120050 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e724
46e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.139307 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.158086 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc 
kubenswrapper[4835]: I0201 07:22:35.173629 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.189112 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.189374 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.189338941 +0000 UTC m=+36.309775405 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.190769 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.211972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.212016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.212027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.212044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.212056 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.233095 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-l7rwg"] Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.233498 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.235251 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.235397 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.235425 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.235534 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.247917 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.263950 4835 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.289926 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/96856bc5-b4b0-4268-8868-65a584408ca7-serviceca\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.289978 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290017 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290056 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290086 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2t5v\" (UniqueName: \"kubernetes.io/projected/96856bc5-b4b0-4268-8868-65a584408ca7-kube-api-access-d2t5v\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290142 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290219 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.290198178 +0000 UTC m=+36.410634712 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.290153 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96856bc5-b4b0-4268-8868-65a584408ca7-host\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290235 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290307 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.290297741 +0000 UTC m=+36.410734285 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290322 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290339 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290352 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290391 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.290377053 +0000 UTC m=+36.410813497 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290492 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290505 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290514 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.290544 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:43.290535387 +0000 UTC m=+36.410971831 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.306543 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.314339 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.339694 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.379501 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.390922 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/96856bc5-b4b0-4268-8868-65a584408ca7-serviceca\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.390998 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d2t5v\" (UniqueName: \"kubernetes.io/projected/96856bc5-b4b0-4268-8868-65a584408ca7-kube-api-access-d2t5v\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.391022 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96856bc5-b4b0-4268-8868-65a584408ca7-host\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.391067 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96856bc5-b4b0-4268-8868-65a584408ca7-host\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.391784 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/96856bc5-b4b0-4268-8868-65a584408ca7-serviceca\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416297 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.416321 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.420635 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: 
I0201 07:22:35.460400 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2t5v\" (UniqueName: \"kubernetes.io/projected/96856bc5-b4b0-4268-8868-65a584408ca7-kube-api-access-d2t5v\") pod \"node-ca-l7rwg\" (UID: \"96856bc5-b4b0-4268-8868-65a584408ca7\") " pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.486790 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c
7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.508272 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:45:30.214005401 +0000 UTC Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519240 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.519252 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.529158 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.551683 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-l7rwg" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.565884 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.565995 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.566046 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:35 crc kubenswrapper[4835]: E0201 07:22:35.566164 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:35 crc kubenswrapper[4835]: W0201 07:22:35.566653 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96856bc5_b4b0_4268_8868_65a584408ca7.slice/crio-1fb0b190274bb3d2a6c6fe8d462824c4fa7dc16841981476b9bb5cb8d0687ef1 WatchSource:0}: Error finding container 1fb0b190274bb3d2a6c6fe8d462824c4fa7dc16841981476b9bb5cb8d0687ef1: Status 404 returned error can't find the container with id 1fb0b190274bb3d2a6c6fe8d462824c4fa7dc16841981476b9bb5cb8d0687ef1 Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.587281 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.611960 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.623700 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.639109 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc 
kubenswrapper[4835]: I0201 07:22:35.679102 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.715530 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725403 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.725450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.745500 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l7rwg" event={"ID":"96856bc5-b4b0-4268-8868-65a584408ca7","Type":"ContainerStarted","Data":"1fb0b190274bb3d2a6c6fe8d462824c4fa7dc16841981476b9bb5cb8d0687ef1"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.746845 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.746901 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.746915 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"36357963120e59ecd5d22213f9bd0b316d664159db2bb8a508d29de03da3fb3a"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749433 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749534 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749591 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749651 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749726 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.749782 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.750726 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585" exitCode=0 Feb 01 
07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.750770 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.759070 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.795306 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5
ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.831927 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.838834 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.879793 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.915637 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.934653 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:35Z","lastTransitionTime":"2026-02-01T07:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:35 crc kubenswrapper[4835]: I0201 07:22:35.960728 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.002712 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.037890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038211 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038254 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.038656 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.080316 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.129780 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140852 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.140952 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.155726 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.205329 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.239540 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243681 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.243709 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.293628 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.323730 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.345836 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.447984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.448031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.448042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.448058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.448070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.509160 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:41:33.310402716 +0000 UTC Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.550755 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.566688 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:36 crc kubenswrapper[4835]: E0201 07:22:36.566862 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.653892 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764232 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.764255 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.769480 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f" exitCode=0 Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.769687 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.774736 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-l7rwg" event={"ID":"96856bc5-b4b0-4268-8868-65a584408ca7","Type":"ContainerStarted","Data":"1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.794795 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.821362 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.848069 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.867584 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.871242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.887549 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.909845 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.934210 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.951747 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.971344 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.974606 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:36Z","lastTransitionTime":"2026-02-01T07:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:36 crc kubenswrapper[4835]: I0201 07:22:36.986142 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z"
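Every "Failed to update status for pod" entry above fails the same way: the kubelet's status patch is routed through the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod, and the TLS handshake is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-01. The sketch below reproduces the validity-window check behind "x509: certificate has expired or is not yet valid". The certificate path is an illustrative assumption (the network-node-identity pod shown later mounts its cert under /etc/webhook-cert/), not a path taken from these logs.

// certwindow.go: minimal sketch of the NotBefore/NotAfter check that
// fails in the log entries above. Assumes a PEM-encoded certificate
// at a hypothetical path.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; substitute wherever the webhook's serving
	// cert actually lives on the node.
	raw, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n",
		cert.NotBefore.Format(time.RFC3339),
		cert.NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339))
	// The TLS handshake enforces exactly this window; outside it the
	// chain is rejected with "certificate has expired or is not yet valid".
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate has expired or is not yet valid")
	}
}

From a shell, openssl x509 -noout -dates -in tls.crt prints the same window; either way, the remedy is rotating the webhook certificate, or correcting the node clock if it has jumped ahead.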
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:36Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.021004 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.037967 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.064114 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078130 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078187 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.078272 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.083944 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.102396 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.126167 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.142467 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.160546 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.179293 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.181733 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.198540 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z"
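Alongside the webhook failures, the kubelet keeps flipping the node to NotReady: its runtime network check reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/ (the multus-additional-cni-plugins init containers above are still copying plugins into place). Below is a rough sketch of that readiness probe, assuming the usual libcni config extensions (.conf, .conflist, .json); it approximates the check rather than reproducing the kubelet's exact code.

// cnicheck.go: sketch of the readiness probe implied by the message
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the log message above.
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: %v\n", err)
		return
	}
	for _, e := range entries {
		// Assumed extension set; libcni conventionally accepts these.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("NetworkReady=true: found %s\n", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file; has your network provider started?")
}

Once the network provider writes its configuration into that directory the check passes, NetworkReady flips back to true, and these NodeNotReady events stop.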
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.246030 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.285688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.285966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.286099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.286283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.286401 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.286820 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.324794 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.329596 4835 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.389451 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.495164 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.509598 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:59:58.901672431 +0000 UTC Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.566206 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.566447 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:37 crc kubenswrapper[4835]: E0201 07:22:37.566504 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:37 crc kubenswrapper[4835]: E0201 07:22:37.566834 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598317 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.598450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.700909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.700969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.700987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.701014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.701035 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.785471 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.789822 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6" exitCode=0 Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.789931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.803567 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:37 crc kubenswrapper[4835]: I0201 07:22:37.906346 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:37Z","lastTransitionTime":"2026-02-01T07:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010667 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.010691 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.113101 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.216619 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.319535 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.342891 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.342891 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.361563 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.373004 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.384304 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.405785 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.425671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.426054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.426067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.426087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.426101 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.431734 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.454305 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.474814 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.494184 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.507985 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.510033 4835 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:22:23.488858028 +0000 UTC Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527852 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.527926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.530593 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.544710 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.562174 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.565751 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:38 crc kubenswrapper[4835]: E0201 07:22:38.565868 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.575933 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.593079 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.618955 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630956 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.630969 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.633320 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.657678 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.677228 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.695210 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.712899 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.727689 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.736311 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.740147 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.759318 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.777142 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.797354 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.800185 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341" exitCode=0 Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.800252 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" 
event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.815537 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.839515 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840046 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.840121 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.860850 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.897033 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.913335 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.932144 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943572 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.943645 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:38Z","lastTransitionTime":"2026-02-01T07:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.950585 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.970275 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:38 crc kubenswrapper[4835]: I0201 07:22:38.993994 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:38Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.025594 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.046535 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.047718 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.065257 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.082377 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.115936 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.134167 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151094 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151200 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151234 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.151256 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.152912 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.169970 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.231936 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z 
is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.248080 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.253576 4835 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356755 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356798 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.356816 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.459964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.460026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.460043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.460073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.460091 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.510788 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:35:22.297920587 +0000 UTC Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.562787 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.566298 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:39 crc kubenswrapper[4835]: E0201 07:22:39.566500 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.566562 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:39 crc kubenswrapper[4835]: E0201 07:22:39.566730 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.665874 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770436 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.770496 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.808498 4835 generic.go:334] "Generic (PLEG): container finished" podID="00cf5926-f943-44c0-a351-db83ab17c2a1" containerID="8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85" exitCode=0 Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.808573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerDied","Data":"8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.836283 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.859316 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.874744 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.883067 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21
591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.905763 4835 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.923887 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.938814 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.951854 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.973273 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.983742 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:39Z","lastTransitionTime":"2026-02-01T07:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:39 crc kubenswrapper[4835]: I0201 07:22:39.991734 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:39Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.007920 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.026469 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.054717 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.068448 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.087301 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.088802 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.191698 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.293884 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396556 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.396578 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.499957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.500031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.500055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.500083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.500102 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.511771 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:05:54.897002703 +0000 UTC Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.566670 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:40 crc kubenswrapper[4835]: E0201 07:22:40.566897 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603326 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.603646 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.706982 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.810366 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.818753 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.819014 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.826140 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" event={"ID":"00cf5926-f943-44c0-a351-db83ab17c2a1","Type":"ContainerStarted","Data":"9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.854780 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7
ecf63e2d238327abea5dcd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.887124 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.896243 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.914601 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:40Z","lastTransitionTime":"2026-02-01T07:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.916899 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.938847 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.960933 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:40 crc kubenswrapper[4835]: I0201 07:22:40.985535 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:40Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.008083 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.017215 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.028046 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.048705 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.067085 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.090293 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.112952 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.119907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.119948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.119964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.119986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.120003 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.134977 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.152540 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.175516 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.194960 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.213049 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223544 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.223672 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.246052 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7
ecf63e2d238327abea5dcd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.264054 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.282983 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.302387 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.323338 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.326650 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.354961 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.380682 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.403308 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.422789 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.428990 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.439855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.12
6.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.460171 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.512862 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:20:08.975271133 +0000 UTC Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.532771 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.566311 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:41 crc kubenswrapper[4835]: E0201 07:22:41.566518 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.566631 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:41 crc kubenswrapper[4835]: E0201 07:22:41.566824 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.635782 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739554 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739594 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.739638 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.830551 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.831219 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.842738 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.866903 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.882832 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.900495 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.915872 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.934800 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.945560 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:41Z","lastTransitionTime":"2026-02-01T07:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.958384 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.979270 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:41 crc kubenswrapper[4835]: I0201 07:22:41.998712 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:41Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.035371 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.048661 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.048716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.048730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc 
kubenswrapper[4835]: I0201 07:22:42.048751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.048766 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.056108 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.081831 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.105529 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.121245 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.154980 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.181645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.181692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc 
kubenswrapper[4835]: I0201 07:22:42.181704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.181727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.181739 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.195873 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:42Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.284230 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.284314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 
07:22:42.284339 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.284371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.284396 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.387953 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.492941 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.492987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.492996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.493010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.493021 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.513265 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:17:44.327435246 +0000 UTC Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.566289 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:42 crc kubenswrapper[4835]: E0201 07:22:42.566623 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.596879 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.700247 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.803559 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.834116 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:42 crc kubenswrapper[4835]: I0201 07:22:42.906496 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:42Z","lastTransitionTime":"2026-02-01T07:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009675 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.009818 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.112888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.112960 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.112995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.113024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.113044 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.216771 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.275819 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.275993 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.275970727 +0000 UTC m=+52.396407171 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.323807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.323901 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.323923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.323949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.324149 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.376971 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.377022 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.377048 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.377082 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377175 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377200 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377220 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377231 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377252 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.377231464 +0000 UTC m=+52.497667908 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377279 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.377267034 +0000 UTC m=+52.497703588 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377291 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377330 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.377312386 +0000 UTC m=+52.497748930 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377383 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377393 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377401 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.377448 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:22:59.377440739 +0000 UTC m=+52.497877263 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.412228 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.425699 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.429621 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.442655 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.450904 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.469477 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.473782 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.486668 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.492894 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.506666 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.506814 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508393 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508442 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.508459 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.513842 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:22:58.689958792 +0000 UTC Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.566579 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.566606 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.566789 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:43 crc kubenswrapper[4835]: E0201 07:22:43.566900 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.611936 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.714429 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.817463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.840082 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/0.log" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.844157 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34" exitCode=1 Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.844205 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.845295 4835 scope.go:117] "RemoveContainer" containerID="eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.870917 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.891597 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.908969 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920488 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.920548 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:43Z","lastTransitionTime":"2026-02-01T07:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.924028 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.950011 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7
ecf63e2d238327abea5dcd34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.962748 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.979847 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:43 crc kubenswrapper[4835]: I0201 07:22:43.997240 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:43Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.016352 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.023399 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.033492 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:
22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.046832 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.062596 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.078441 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.087780 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125326 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.125384 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228021 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228105 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.228117 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.331498 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.434840 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.514250 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:40:49.405608474 +0000 UTC Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537150 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.537163 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.566756 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:44 crc kubenswrapper[4835]: E0201 07:22:44.566888 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.640271 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.742926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.845224 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.848085 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/0.log" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.850258 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.850369 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.867840 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.881771 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.892516 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.907148 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.920983 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.932935 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.943935 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947561 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.947646 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:44Z","lastTransitionTime":"2026-02-01T07:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.962557 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.972268 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.982698 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:44 crc kubenswrapper[4835]: I0201 07:22:44.992560 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:44Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.007472 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.019716 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.034244 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.050884 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.153588 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.255994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.256066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.256090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.256121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.256143 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359536 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.359611 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462434 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.462509 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.515007 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:04:28.554137184 +0000 UTC
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.565767 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.565753 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:22:45 crc kubenswrapper[4835]: E0201 07:22:45.565920 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.565885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:45 crc kubenswrapper[4835]: E0201 07:22:45.566039 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.566065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.566108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.566137 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.566160 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.668899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.668953 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.668971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.668994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.669016 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.772889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.772967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.772991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.773023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.773046 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.856371 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/1.log"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.857255 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/0.log"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.862859 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" exitCode=1
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.862916 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732"}
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.862983 4835 scope.go:117] "RemoveContainer" containerID="eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.864170 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732"
Feb 01 07:22:45 crc kubenswrapper[4835]: E0201 07:22:45.864578 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.876944 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.887206 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.907858 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.928884 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.953061 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.972278 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.979922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.979969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.979985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.980008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.980028 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:45Z","lastTransitionTime":"2026-02-01T07:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:45 crc kubenswrapper[4835]: I0201 07:22:45.991052 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:45Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.009629 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.025146 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.042572 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.063472 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.076161 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf"] Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.076923 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.081738 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.081886 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082752 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.082889 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.085090 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.106493 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.129799 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy 
event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7
6fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.141944 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.155176 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.155953 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.170272 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186633 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.186772 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.190254 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.207686 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.208509 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.208614 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.208761 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.208883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6kkn\" (UniqueName: \"kubernetes.io/projected/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-kube-api-access-k6kkn\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.227603 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.257667 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy 
event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7
6fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.275311 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.290273 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.298513 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 
2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.310294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.310364 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.310455 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6kkn\" (UniqueName: \"kubernetes.io/projected/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-kube-api-access-k6kkn\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.310507 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.311713 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.311871 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.319467 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.319905 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.320329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.339465 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.344396 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6kkn\" (UniqueName: \"kubernetes.io/projected/97c5a8c8-51ec-4c9b-9334-1c059fce5ee2-kube-api-access-k6kkn\") pod \"ovnkube-control-plane-749d76644c-7r4zf\" (UID: \"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.360227 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.377345 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393887 4835 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.393913 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.400233 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.400301 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f15
1840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.423644 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: W0201 07:22:46.425209 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97c5a8c8_51ec_4c9b_9334_1c059fce5ee2.slice/crio-2da9a37effcdf926ab08b87e3ef5413f100192fd04ba939bcbc7e980d128e702 WatchSource:0}: Error finding container 2da9a37effcdf926ab08b87e3ef5413f100192fd04ba939bcbc7e980d128e702: Status 404 returned error can't find the container with id 2da9a37effcdf926ab08b87e3ef5413f100192fd04ba939bcbc7e980d128e702 Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.444320 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.464862 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.485931 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497531 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497661 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.497686 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.515126 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:34:52.9349636 +0000 UTC Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.516390 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eae2f00645693ff6529ffa70014830776fcd76e7ecf63e2d238327abea5dcd34\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:42Z\\\",\\\"message\\\":\\\"hift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:42.866826 6156 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:42.866851 6156 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:22:42.866869 6156 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0201 07:22:42.866878 6156 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0201 07:22:42.866911 6156 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:42.866931 6156 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:22:42.866943 6156 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0201 07:22:42.867442 6156 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:42.867480 6156 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:22:42.867489 6156 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:22:42.867525 6156 factory.go:656] Stopping watch factory\\\\nI0201 07:22:42.867542 6156 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:22:42.867569 6156 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:42.867595 6156 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 
07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.532345 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.552375 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5
ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"st
artTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.566801 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:46 crc kubenswrapper[4835]: E0201 07:22:46.566968 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.570248 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.587105 4835 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600457 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.600577 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.606510 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\
\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.620378 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.640312 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.662172 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.679375 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.697583 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702809 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702825 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.702865 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.716194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.764548 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 
07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.805319 4835 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.869628 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/1.log" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.876772 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" Feb 01 07:22:46 crc kubenswrapper[4835]: E0201 07:22:46.877046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.877642 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" event={"ID":"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2","Type":"ContainerStarted","Data":"9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.877701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" event={"ID":"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2","Type":"ContainerStarted","Data":"bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.877714 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" event={"ID":"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2","Type":"ContainerStarted","Data":"2da9a37effcdf926ab08b87e3ef5413f100192fd04ba939bcbc7e980d128e702"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.898043 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909654 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.909705 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:46Z","lastTransitionTime":"2026-02-01T07:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.915863 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.940191 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.974047 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82f
f1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:46 crc kubenswrapper[4835]: I0201 07:22:46.989879 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:46Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.005606 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.012605 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.021602 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.033869 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.053201 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.072616 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.093764 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.108756 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.116337 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.125508 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.142551 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.161479 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.173855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.190586 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.208331 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.219983 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.243559 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.264327 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.285901 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.299769 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.315003 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322154 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322243 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.322289 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.329706 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.341082 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.358131 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.372447 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.385908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.398481 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.412034 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.424980 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.425025 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.425037 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.425053 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.425065 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.515273 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:49:56.044083909 +0000 UTC Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527548 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.527565 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.566163 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.566335 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.566407 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.566684 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.590404 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.599363 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2msm5"] Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.600112 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.600207 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.610914 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.631123 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.632985 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.666258 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.686952 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.706046 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.727236 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.727293 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthdk\" (UniqueName: \"kubernetes.io/projected/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-kube-api-access-tthdk\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.730952 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.734838 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.745539 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.763044 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.787751 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852
910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.810447 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.827848 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.828100 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.828230 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.828480 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthdk\" (UniqueName: \"kubernetes.io/projected/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-kube-api-access-tthdk\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.828531 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:22:48.328499397 +0000 UTC m=+41.448935861 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.840262 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.853167 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.860854 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthdk\" (UniqueName: \"kubernetes.io/projected/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-kube-api-access-tthdk\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.867886 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.882667 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" Feb 01 07:22:47 crc kubenswrapper[4835]: E0201 07:22:47.882919 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.916148 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 
07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.930595 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.943487 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:47Z","lastTransitionTime":"2026-02-01T07:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.948167 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.966214 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:47 crc kubenswrapper[4835]: I0201 07:22:47.995737 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:47Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.015576 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.040872 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.046729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.046784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc 
kubenswrapper[4835]: I0201 07:22:48.046802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.046828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.046846 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.061889 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.081484 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.099219 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.117773 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.136319 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.151176 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.160602 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.180893 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.206096 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.226855 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.245225 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:48Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254398 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.254565 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.338578 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:48 crc kubenswrapper[4835]: E0201 07:22:48.338760 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:48 crc kubenswrapper[4835]: E0201 07:22:48.338842 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:22:49.338820203 +0000 UTC m=+42.459256667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.356850 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.464947 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.515583 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:40:21.51002953 +0000 UTC Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.566186 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:48 crc kubenswrapper[4835]: E0201 07:22:48.566353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.568520 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.671284 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774528 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.774568 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.877349 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:48 crc kubenswrapper[4835]: I0201 07:22:48.981526 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:48Z","lastTransitionTime":"2026-02-01T07:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.084560 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187248 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.187442 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.292814 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.349375 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.349592 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.349699 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:22:51.3496762 +0000 UTC m=+44.470112644 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395365 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.395385 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.497934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.497996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.498014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.498068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.498098 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.516168 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:04:05.044000037 +0000 UTC Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.565838 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.565969 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.566476 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.566558 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.566614 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:49 crc kubenswrapper[4835]: E0201 07:22:49.566676 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.600800 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.703779 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.806749 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909091 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909133 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:49 crc kubenswrapper[4835]: I0201 07:22:49.909152 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:49Z","lastTransitionTime":"2026-02-01T07:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.012561 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.114890 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.216966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.217018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.217035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.217055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.217071 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.319788 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422846 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.422965 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.517076 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:44:04.365550946 +0000 UTC Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.525495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.525707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.525841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.525974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.526141 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.566065 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:50 crc kubenswrapper[4835]: E0201 07:22:50.566565 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.629200 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.731894 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835184 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.835235 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:50 crc kubenswrapper[4835]: I0201 07:22:50.938857 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:50Z","lastTransitionTime":"2026-02-01T07:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.042787 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.145556 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.248913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.248980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.249000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.249030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.249048 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.351934 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.372651 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.372865 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.372939 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:22:55.37291799 +0000 UTC m=+48.493354464 (durationBeforeRetry 4s). 
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454506 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454600 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.454618 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.518880 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:33:26.083762835 +0000 UTC
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557887 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.557912 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.566173 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.566305 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.566445 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.566610 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.566835 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:51 crc kubenswrapper[4835]: E0201 07:22:51.567387 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.661490 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764474 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.764537 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.867577 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:51 crc kubenswrapper[4835]: I0201 07:22:51.970347 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:51Z","lastTransitionTime":"2026-02-01T07:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073734 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.073754 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177509 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177576 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177626 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.177648 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.281689 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.384532 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.487905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.488212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.488349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.488705 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.488878 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.519511 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:23:41.608144647 +0000 UTC Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.565815 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:52 crc kubenswrapper[4835]: E0201 07:22:52.566008 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.591881 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.694253 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.797455 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.900914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.901081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.901113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.901192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:52 crc kubenswrapper[4835]: I0201 07:22:52.901221 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:52Z","lastTransitionTime":"2026-02-01T07:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004322 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.004340 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.107241 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.210213 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313361 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.313483 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417258 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.417337 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.519620 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:45:35.682345378 +0000 UTC Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520572 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.520642 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.565764 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.565810 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.565831 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.565980 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.566180 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.566326 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.624798 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.727875 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.734739 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.755068 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760331 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760373 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.760391 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.782663 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788488 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788547 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.788574 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.810680 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.815850 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.833700 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.839558 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.873299 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:53Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:53 crc kubenswrapper[4835]: E0201 07:22:53.873643 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877330 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.877377 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:53 crc kubenswrapper[4835]: I0201 07:22:53.980889 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:53Z","lastTransitionTime":"2026-02-01T07:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.084251 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.496231 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.519991 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:28:14.437561785 +0000 UTC Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.566787 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:54 crc kubenswrapper[4835]: E0201 07:22:54.567003 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:54 crc kubenswrapper[4835]: I0201 07:22:54.599441 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:54Z","lastTransitionTime":"2026-02-01T07:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
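The recurring NodeNotReady condition in these entries is reported by the container runtime, not computed by the kubelet itself: NetworkReady stays false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A sketch of that check, under the assumption (consistent with the error text, which comes from the ocicni library watching this directory) that any *.conf, *.conflist, or *.json file counts:

```go
// Hypothetical triage helper: list candidate CNI config files in the
// directory named by the kubelet message. The network plugin is considered
// ready once a usable config file exists here.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the kubelet message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file found; network plugin not ready")
	}
}
```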
Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.321293 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.424878 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.425057 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.425247 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.425339 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:23:03.425314652 +0000 UTC m=+56.545751126 (durationBeforeRetry 8s). 
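The "durationBeforeRetry 8s" above reflects exponential backoff on the failed MountVolume operation for the unregistered metrics-daemon-secret. Assuming the usual shape of kubelet volume-operation backoff, an initial delay of 500ms doubling toward a cap (the exact constants are an assumption here), the sequence reaches 8s on the fifth consecutive failure:

```go
// Sketch of the doubling retry delay implied by "durationBeforeRetry 8s":
// exponential backoff with an assumed 500ms initial delay, factor 2, and a
// cap of about two minutes.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %v\n", attempt, delay) // attempt 5 -> 8s
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```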
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.520405 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:58:04.603734667 +0000 UTC Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.527945 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.566707 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.566786 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.566959 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.567086 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.567205 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:55 crc kubenswrapper[4835]: E0201 07:22:55.567311 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.630662 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734456 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:55 crc kubenswrapper[4835]: I0201 07:22:55.734581 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:55Z","lastTransitionTime":"2026-02-01T07:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.455967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.456023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.456042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.456064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.456080 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.521266 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 09:13:01.252542572 +0000 UTC Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558852 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558944 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.558960 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.566397 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:56 crc kubenswrapper[4835]: E0201 07:22:56.566612 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
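The certificate_manager lines recur with a different rotation deadline on each pass (2025-12-01, 2025-12-03, 2025-12-11, ...) because the deadline is re-randomized every time it is computed; since every computed deadline lies in the past relative to the node clock of 2026-02-01, rotation fires immediately and the loop repeats. A sketch of the jittered computation, assuming the roughly 70-90% lifetime band used by client-go's certificate manager (the exact constants and the one-year lifetime are assumptions):

```go
// Sketch of the jittered rotation deadline behind the repeating
// certificate_manager lines: pick a random point in the 70-90% band of the
// certificate lifetime.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// random fraction in [0.7, 0.9) of the total lifetime
	jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed 1y lifetime
	for i := 0; i < 3; i++ {
		// a new random deadline each call, mirroring the changing log output
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```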
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.662132 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:56 crc kubenswrapper[4835]: I0201 07:22:56.764637 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:56Z","lastTransitionTime":"2026-02-01T07:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.487653 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.522033 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:48:51.095426439 +0000 UTC Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.566067 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.566099 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:57 crc kubenswrapper[4835]: E0201 07:22:57.566335 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:57 crc kubenswrapper[4835]: E0201 07:22:57.566496 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.566584 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:57 crc kubenswrapper[4835]: E0201 07:22:57.566731 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.590689 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.600339 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.620489 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.638397 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.656810 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.690573 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c
5f3cbe97cc2fc4f5afb19732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695661 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.695704 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.708039 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.726557 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 
07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.746622 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.765181 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.784015 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.798743 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.810107 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.831155 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add9
91e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.851586 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.873212 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.888844 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.901885 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:57Z","lastTransitionTime":"2026-02-01T07:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:57 crc kubenswrapper[4835]: I0201 07:22:57.907624 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:57Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.005509 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc 
kubenswrapper[4835]: I0201 07:22:58.005578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.005602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.005633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.005654 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.108687 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.212460 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.315925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.315994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.316014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.316043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.316067 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.419680 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522276 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:55:58.65557722 +0000 UTC Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.522998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.523011 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.565757 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:58 crc kubenswrapper[4835]: E0201 07:22:58.565944 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.567318 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.626778 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.729925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.729980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.729997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.730020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.730038 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.849684 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.927978 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/1.log" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.931364 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.931928 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.954174 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:58Z","lastTransitionTime":"2026-02-01T07:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.956694 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:58Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.972698 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:58Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:58 crc kubenswrapper[4835]: I0201 07:22:58.987267 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:58Z is after 2025-08-24T17:21:41Z" Feb 01 
07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.003779 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.022223 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.049472 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.057356 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.073744 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z"
Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.094097 4835 status_manager.go:875] "Failed to update status for pod"
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.127677 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.137498 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.148892 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.160221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.160277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.160295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.160318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc 
kubenswrapper[4835]: I0201 07:22:59.160337 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.163439 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.182756 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.205350 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.223458 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.237252 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263381 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263484 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.263497 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.365830 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.374347 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.374648 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.37462319 +0000 UTC m=+84.495059664 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.468926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.468993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.469015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.469040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.469059 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.475875 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.475955 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.475998 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.476032 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476219 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476244 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476262 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476324 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.476303887 +0000 UTC m=+84.596740361 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476733 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476787 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476813 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476825 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476838 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.47680658 +0000 UTC m=+84.597243054 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476836 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476862 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.476850291 +0000 UTC m=+84.597286735 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.476969 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:23:31.476941253 +0000 UTC m=+84.597377717 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.522931 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:09:21.587737161 +0000 UTC Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.566345 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.566470 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.566343 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.566608 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.566751 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.566880 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571202 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.571245 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.674439 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.778373 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.880905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.880982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.881003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.881027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.881044 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.937780 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/2.log" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.938769 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/1.log" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.942322 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" exitCode=1 Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.942388 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.942478 4835 scope.go:117] "RemoveContainer" containerID="31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.943333 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:22:59 crc kubenswrapper[4835]: E0201 07:22:59.943609 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.972819 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.983985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.984050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.984062 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.984078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.984089 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:22:59Z","lastTransitionTime":"2026-02-01T07:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.988988 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:22:59 crc kubenswrapper[4835]: I0201 07:22:59.999848 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.008965 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.021437 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.035494 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.048050 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.058052 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.080599 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31498144fcdf8faef1d9db48aa755bf14ac3670c5f3cbe97cc2fc4f5afb19732\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:44Z\\\",\\\"message\\\":\\\"ute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.911996 6313 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912070 6313 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:22:44.912661 6313 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:22:44.912696 6313 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.912825 6313 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0201 07:22:44.914511 6313 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0201 07:22:44.914576 6313 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:22:44.914655 6313 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0201 07:22:44.914663 6313 factory.go:656] Stopping watch factory\\\\nI0201 07:22:44.914682 6313 ovnkube.go:599] Stopped ovnkube\\\\nI0201 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb0
0a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.086486 
4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.093189 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.104654 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.117650 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.133093 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.153244 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.176312 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.189285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.189370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc 
kubenswrapper[4835]: I0201 07:23:00.189392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.189498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.189534 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.197794 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.291964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.292032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 
07:23:00.292047 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.292069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.292084 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395317 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395477 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.395504 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.498296 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.523609 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:14:30.12398468 +0000 UTC Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.566153 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:00 crc kubenswrapper[4835]: E0201 07:23:00.566396 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.600854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.600974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.600995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.601020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.601038 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.704159 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806956 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806981 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.806998 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.910242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:00Z","lastTransitionTime":"2026-02-01T07:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.947947 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/2.log" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.953018 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:23:00 crc kubenswrapper[4835]: E0201 07:23:00.953261 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:00 crc kubenswrapper[4835]: I0201 07:23:00.972267 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:00Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.003713 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.013630 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.020632 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.038624 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.059144 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.077617 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.098917 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.115939 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.123242 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.143306 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add9
91e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.160598 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.176439 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.191798 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.206313 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219480 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.219535 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.222512 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.243894 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.262887 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:01Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322338 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322361 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.322378 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.426257 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.524622 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 14:38:17.888533582 +0000 UTC Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.529235 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.566395 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.566449 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.566402 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:01 crc kubenswrapper[4835]: E0201 07:23:01.566601 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:01 crc kubenswrapper[4835]: E0201 07:23:01.566803 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:01 crc kubenswrapper[4835]: E0201 07:23:01.566966 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.632931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.632994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.633010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.633035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.633053 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.735986 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.839519 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:01 crc kubenswrapper[4835]: I0201 07:23:01.942838 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:01Z","lastTransitionTime":"2026-02-01T07:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046282 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046322 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.046362 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149528 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.149586 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.252946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.253010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.253035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.253065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.253088 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.356251 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.459312 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.524999 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 22:30:56.261003884 +0000 UTC Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562258 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562361 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.562379 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.566584 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:02 crc kubenswrapper[4835]: E0201 07:23:02.566767 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.665802 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768664 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.768819 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.872575 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975481 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975508 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:02 crc kubenswrapper[4835]: I0201 07:23:02.975526 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:02Z","lastTransitionTime":"2026-02-01T07:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079566 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.079643 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182074 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.182227 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.285922 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.389852 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.493474 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.515333 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.515629 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.515732 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:23:19.515705022 +0000 UTC m=+72.636141486 (durationBeforeRetry 16s). 
Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.525967 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 19:44:26.850472174 +0000 UTC
Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.566458 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.566576 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.566658 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.566783 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe"
Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.567075 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:23:03 crc kubenswrapper[4835]: E0201 07:23:03.567165 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.595916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.595983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.596003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.596028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.596045 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699508 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.699733 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.802675 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.860749 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.878939 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.885674 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906099 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906844 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.906885 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:03Z","lastTransitionTime":"2026-02-01T07:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.924987 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.957080 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.974361 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:03 crc kubenswrapper[4835]: I0201 07:23:03.993082 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:03Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.009678 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.012385 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.031594 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.051699 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.075826 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078952 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.078972 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.097389 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.101655 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"8
3c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.106951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.107002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.107024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.107050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.107069 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.118554 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.127178 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 
2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.131940 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.135646 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 
07:23:04.150168 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.151147 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.156843 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.168707 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.177587 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.182806 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.189454 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.203622 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:04Z is after 
2025-08-24T17:21:41Z" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.204250 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.206991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.207036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.207054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.207088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.207108 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310547 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310685 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.310710 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.414280 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.517796 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.526382 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:18:11.319727901 +0000 UTC Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.565899 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:04 crc kubenswrapper[4835]: E0201 07:23:04.566057 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.620371 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724195 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.724266 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.827817 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:04 crc kubenswrapper[4835]: I0201 07:23:04.930184 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:04Z","lastTransitionTime":"2026-02-01T07:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033643 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.033660 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.136541 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.239755 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.343165 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.447296 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.526718 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:04:46.050442486 +0000 UTC Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.551263 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.566616 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.566622 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:05 crc kubenswrapper[4835]: E0201 07:23:05.566881 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.566623 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:05 crc kubenswrapper[4835]: E0201 07:23:05.567012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:05 crc kubenswrapper[4835]: E0201 07:23:05.567191 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.653947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.654007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.654024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.654048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.654067 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756529 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.756652 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860248 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.860264 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:05 crc kubenswrapper[4835]: I0201 07:23:05.963599 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:05Z","lastTransitionTime":"2026-02-01T07:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.066941 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170572 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.170698 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.273948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.274022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.274045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.274076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.274104 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.376951 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.480323 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.527843 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:30:43.940132067 +0000 UTC Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.566683 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:06 crc kubenswrapper[4835]: E0201 07:23:06.566881 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583104 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.583161 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.687158 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.790740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.790825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.790905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.790985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.791007 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.894687 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:06 crc kubenswrapper[4835]: I0201 07:23:06.997818 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:06Z","lastTransitionTime":"2026-02-01T07:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.100836 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.203798 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.306268 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.408929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.409007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.409025 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.409047 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.409064 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.513953 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.528874 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:20:05.974152274 +0000 UTC Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.565758 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.565828 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:07 crc kubenswrapper[4835]: E0201 07:23:07.565984 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.566096 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:07 crc kubenswrapper[4835]: E0201 07:23:07.566317 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:07 crc kubenswrapper[4835]: E0201 07:23:07.566574 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.585287 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.613729 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f
36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.616321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 
07:23:07.616369 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.616387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.616437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.616455 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.632220 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.652299 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.673356 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.704844 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.718983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.719158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.719273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.719382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.719513 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.724270 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.742498 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.762631 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.783388 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.806575 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.823497 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.834130 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.854729 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add9
91e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.869948 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.887850 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.902649 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.918615 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:07Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926531 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:07 crc kubenswrapper[4835]: I0201 07:23:07.926543 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:07Z","lastTransitionTime":"2026-02-01T07:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030013 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.030456 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.133256 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236843 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.236865 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.340730 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.443304 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.530003 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 10:22:54.872935706 +0000 UTC Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546566 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.546639 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.566195 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:08 crc kubenswrapper[4835]: E0201 07:23:08.566359 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650286 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.650370 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.758200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.758639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.758795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.758952 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.759081 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.861864 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.964968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.965023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.965042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.965069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:08 crc kubenswrapper[4835]: I0201 07:23:08.965086 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:08Z","lastTransitionTime":"2026-02-01T07:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.068186 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.170993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.171391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.171587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.171717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.171855 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.274994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.275073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.275097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.275124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.275145 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.378242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.481744 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.530952 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:43:47.228509642 +0000 UTC Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.566664 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:09 crc kubenswrapper[4835]: E0201 07:23:09.567017 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.566798 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.566747 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:09 crc kubenswrapper[4835]: E0201 07:23:09.567700 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:09 crc kubenswrapper[4835]: E0201 07:23:09.567789 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.584401 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694272 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.694289 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.797864 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:09 crc kubenswrapper[4835]: I0201 07:23:09.900873 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:09Z","lastTransitionTime":"2026-02-01T07:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003750 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.003878 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106839 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106962 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.106984 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.210211 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312094 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.312203 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.415530 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.518725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.519065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.519192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.519351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.519527 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.531351 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:30:43.513690384 +0000 UTC Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.565942 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:10 crc kubenswrapper[4835]: E0201 07:23:10.566153 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.622704 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.725324 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.828540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.829060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.829158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.829265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.829366 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.932986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.933036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.933051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.933071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:10 crc kubenswrapper[4835]: I0201 07:23:10.933084 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:10Z","lastTransitionTime":"2026-02-01T07:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.035763 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138150 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.138265 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246103 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.246147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.349614 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.452770 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.532340 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:28:49.059362321 +0000 UTC Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555286 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.555372 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.565810 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.565812 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:11 crc kubenswrapper[4835]: E0201 07:23:11.566114 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.566144 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:11 crc kubenswrapper[4835]: E0201 07:23:11.566372 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:11 crc kubenswrapper[4835]: E0201 07:23:11.566640 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.567559 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:23:11 crc kubenswrapper[4835]: E0201 07:23:11.567892 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.659575 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762534 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762600 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.762662 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865488 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865561 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.865620 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968667 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:11 crc kubenswrapper[4835]: I0201 07:23:11.968723 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:11Z","lastTransitionTime":"2026-02-01T07:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072286 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072326 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.072346 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.175258 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.278936 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382131 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.382175 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.485391 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.532796 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:34:05.416423956 +0000 UTC Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.566085 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:12 crc kubenswrapper[4835]: E0201 07:23:12.566233 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.587886 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.690463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.792870 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894375 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.894445 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:12 crc kubenswrapper[4835]: I0201 07:23:12.997351 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:12Z","lastTransitionTime":"2026-02-01T07:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.099982 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.202189 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304901 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.304939 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407476 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407536 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.407568 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.510242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.533848 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:31:07.154502441 +0000 UTC Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.566185 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.566206 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.566247 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:13 crc kubenswrapper[4835]: E0201 07:23:13.566285 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:13 crc kubenswrapper[4835]: E0201 07:23:13.566560 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:13 crc kubenswrapper[4835]: E0201 07:23:13.566620 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.612355 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.714879 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.817887 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:13 crc kubenswrapper[4835]: I0201 07:23:13.920805 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:13Z","lastTransitionTime":"2026-02-01T07:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.023280 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.126730 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229487 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.229596 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.282810 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.296096 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300013 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.300115 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.317378 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.320774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.321007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.321220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.321454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.321668 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.334846 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.338640 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.352382 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.357220 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.369279 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:14Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.369451 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.371124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
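Every retry in the loop above is rejected for the same reason: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired at 2025-08-24T17:21:41Z, while the node clock reads 2026-02-01, so the kubelet gives up once the "update node status exceeds retry count" entry above is reached. A minimal sketch for confirming the expiry from the node itself, assuming Python 3 with the third-party cryptography package is available; the endpoint is taken from the Post URL in the log, and verification is skipped on purpose because the certificate is already invalid:

```python
# Minimal sketch: fetch the serving certificate the webhook presents and print
# its validity window next to the current clock. check_hostname/verify_mode are
# disabled because a verifying handshake would abort on the expired certificate.
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party package; assumed installed

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
    with ctx.wrap_socket(sock) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER bytes of the peer cert

cert = x509.load_der_x509_certificate(der)
print("notBefore:", cert.not_valid_before)  # naive datetimes, UTC
print("notAfter: ", cert.not_valid_after)   # the log implies 2025-08-24 17:21:41
print("now:      ", datetime.now(timezone.utc))
```

If notAfter is indeed in the past, rotating that serving certificate (or correcting the node clock, if the clock is what is wrong) is what would unblock the status patches.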
event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.371315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.371493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.371704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.371892 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.474568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.474945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.475145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.475354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.475573 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.534365 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:11:43.875353641 +0000 UTC Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.565862 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:14 crc kubenswrapper[4835]: E0201 07:23:14.566181 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578548 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.578622 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680466 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680479 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.680507 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.782910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.783112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.783194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.783275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.783354 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.885474 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.987777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.988034 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.988161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.988263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:14 crc kubenswrapper[4835]: I0201 07:23:14.988340 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:14Z","lastTransitionTime":"2026-02-01T07:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.090647 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194257 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.194267 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.298463 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.401236 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.503837 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.535671 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:49:33.645495562 +0000 UTC Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.566354 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:15 crc kubenswrapper[4835]: E0201 07:23:15.566470 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.566480 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.566575 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:15 crc kubenswrapper[4835]: E0201 07:23:15.566742 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:15 crc kubenswrapper[4835]: E0201 07:23:15.566844 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.605758 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707775 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.707791 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.809886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.809954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.809977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.810005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.810027 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:15 crc kubenswrapper[4835]: I0201 07:23:15.912147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:15Z","lastTransitionTime":"2026-02-01T07:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.014967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.015017 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.015031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.015049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.015063 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118146 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.118190 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.221908 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324137 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.324268 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.426347 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.528521 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.536018 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:42:30.301849078 +0000 UTC Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.566618 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:16 crc kubenswrapper[4835]: E0201 07:23:16.566800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.631946 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734475 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.734485 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.837691 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:16 crc kubenswrapper[4835]: I0201 07:23:16.940576 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:16Z","lastTransitionTime":"2026-02-01T07:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.043756 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.145713 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248164 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248191 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.248215 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.350461 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.453266 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.536955 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 02:25:51.29653789 +0000 UTC Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555750 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.555792 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.566202 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.566257 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:17 crc kubenswrapper[4835]: E0201 07:23:17.566373 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.566471 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:17 crc kubenswrapper[4835]: E0201 07:23:17.566584 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:17 crc kubenswrapper[4835]: E0201 07:23:17.566651 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.580926 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e
6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.602788 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.616188 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.630666 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.643964 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.656890 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.658182 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.667763 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.679233 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.693721 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.711030 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.735838 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.749777 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761268 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.761289 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.766879 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.789405 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.813211 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.833975 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"na
me\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.854163 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:17Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.864207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.864235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc 
kubenswrapper[4835]: I0201 07:23:17.864244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.864259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.864270 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966573 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:17 crc kubenswrapper[4835]: I0201 07:23:17.966633 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:17Z","lastTransitionTime":"2026-02-01T07:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.069473 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.172707 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.275789 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.378947 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.481367 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.537468 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:39:46.54599595 +0000 UTC Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.565753 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:18 crc kubenswrapper[4835]: E0201 07:23:18.565915 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.586545 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689372 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689442 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.689461 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791587 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.791651 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.893675 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996375 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:18 crc kubenswrapper[4835]: I0201 07:23:18.996390 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:18Z","lastTransitionTime":"2026-02-01T07:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.098776 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201419 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.201440 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304697 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.304770 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.406944 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.406988 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.406996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.407012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.407025 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.509873 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.538238 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:16:27.286561508 +0000 UTC Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.566756 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.566796 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.566888 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.566924 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.567098 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.567189 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.586379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.586557 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:23:19 crc kubenswrapper[4835]: E0201 07:23:19.586615 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:23:51.586599078 +0000 UTC m=+104.707035512 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.611998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.612041 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.612053 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.612069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.612084 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.714989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.715043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.715061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.715086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.715105 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.818881 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:19 crc kubenswrapper[4835]: I0201 07:23:19.921152 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:19Z","lastTransitionTime":"2026-02-01T07:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023965 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.023977 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126457 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126473 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.126483 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228594 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.228670 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.330995 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433170 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.433192 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.535905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.535955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.535972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.535997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.536014 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.538958 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:27:25.172851841 +0000 UTC Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.566616 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:20 crc kubenswrapper[4835]: E0201 07:23:20.566740 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638536 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.638669 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.741778 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844484 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.844493 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:20 crc kubenswrapper[4835]: I0201 07:23:20.946783 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:20Z","lastTransitionTime":"2026-02-01T07:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.020099 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/0.log" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.020169 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e" containerID="213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd" exitCode=1 Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.020208 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerDied","Data":"213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.020727 4835 scope.go:117] "RemoveContainer" containerID="213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.037644 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.050326 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.053355 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.071354 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.088862 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.110552 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.131553 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.147194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.153474 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.162397 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.175600 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.189281 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.202366 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.221284 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.231252 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.243558 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T0
7:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.255230 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.257013 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.274793 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.294970 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z 
[verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:21Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.357598 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.460268 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.539319 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:54:24.876268939 +0000 UTC Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.562883 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.566287 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:21 crc kubenswrapper[4835]: E0201 07:23:21.566404 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.566485 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:21 crc kubenswrapper[4835]: E0201 07:23:21.566603 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.566294 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:21 crc kubenswrapper[4835]: E0201 07:23:21.566718 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665654 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.665716 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.768342 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.872211 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.975959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.976038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.976060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.976089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:21 crc kubenswrapper[4835]: I0201 07:23:21.976106 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:21Z","lastTransitionTime":"2026-02-01T07:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.034583 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/0.log" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.034638 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.059080 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.075531 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079180 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.079231 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.095360 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.110253 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.125822 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.141634 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a
47b3fc3e07a7749bc1b9fcaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.151528 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.162798 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.174624 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181595 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 
07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.181726 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.185112 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.196347 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.212594 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.228531 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.240240 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.254394 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.272908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.284877 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.286828 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:22Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.387508 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc 
kubenswrapper[4835]: I0201 07:23:22.387567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.387584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.387610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.387628 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490472 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490489 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.490529 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.540302 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:36:38.250102985 +0000 UTC Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.565901 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:22 crc kubenswrapper[4835]: E0201 07:23:22.566262 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.566708 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.592843 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695798 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695937 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.695999 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.799562 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902427 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:22 crc kubenswrapper[4835]: I0201 07:23:22.902477 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:22Z","lastTransitionTime":"2026-02-01T07:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.005984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.006039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.006052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.006069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.006460 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.042252 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/2.log" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.047106 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.048722 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.085331 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008
036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.105364 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.108964 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.131197 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.151853 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.171813 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service 
openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.182194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.191063 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.201310 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210752 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.210779 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.214066 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.228517 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.241266 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.258506 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.270948 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.281908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.293897 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.308234 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.313262 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.323035 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:23Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.416774 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.520480 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.540970 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:58:13.918772562 +0000 UTC Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.566549 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.566583 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.566583 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:23 crc kubenswrapper[4835]: E0201 07:23:23.566831 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:23 crc kubenswrapper[4835]: E0201 07:23:23.567026 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:23 crc kubenswrapper[4835]: E0201 07:23:23.567202 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.622949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.623001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.623016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.623035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.623049 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.725773 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.828874 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932373 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:23 crc kubenswrapper[4835]: I0201 07:23:23.932395 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:23Z","lastTransitionTime":"2026-02-01T07:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.035563 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.053534 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.054622 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/2.log" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.058623 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" exitCode=1 Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.058688 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.058737 4835 scope.go:117] "RemoveContainer" containerID="fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.060376 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.061119 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.087259 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.103834 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.124077 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138465 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.138521 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.142684 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.160929 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.192000 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\
\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc43ac3779dd67ad98503c3707656dcab592b42a47b3fc3e07a7749bc1b9fcaa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:22:59Z\\\",\\\"message\\\":\\\"tialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:22:59Z is after 2025-08-24T17:21:41Z]\\\\nI0201 07:22:59.601127 6506 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-operator-lifecycle-manager/olm-operator-metrics]} name:Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.168:8443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {63b1440a-0908-4cab-8799-012fa1cf0b07}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0201 07:22:59.601170 6506 services_controller.go:444] Built service openshift-kub\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:23Z\\\",\\\"message\\\":\\\"7:23:23.631079 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:23:23.631159 6884 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631243 6884 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631499 6884 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:23:23.632231 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:23:23.632268 6884 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 07:23:23.632293 6884 handler.go:190] Sending *v1.Namespace event handler 1 for 
removal\\\\nI0201 07:23:23.632309 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:23:23.632431 6884 factory.go:656] Stopping watch factory\\\\nI0201 07:23:23.632451 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:23:23.632677 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:23:23.632717 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 07:23:23.632728 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.206285 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.222982 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241566 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241632 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.241868 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 
07:23:24.260451 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.280755 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.303596 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.323067 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.342029 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344293 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344322 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.344344 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.360155 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.376024 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.392373 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.446942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.446992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.447008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.447030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.447047 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.541403 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:23:46.06709514 +0000 UTC Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.549856 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.565731 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.565940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616518 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.616634 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.634596 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.639391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.639490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.639516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.639543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.639564 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.661214 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665474 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.665582 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.685870 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.691126 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.711895 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.717171 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.737201 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:24Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:24 crc kubenswrapper[4835]: E0201 07:23:24.737453 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.739977 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.842999 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:24 crc kubenswrapper[4835]: I0201 07:23:24.946161 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:24Z","lastTransitionTime":"2026-02-01T07:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.055147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.065329 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.071713 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:23:25 crc kubenswrapper[4835]: E0201 07:23:25.072010 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.094005 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.115962 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.138099 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159095 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.159192 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.160260 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.177543 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.197346 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.211606 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.224265 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.238090 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.252649 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261943 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.261996 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.265753 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.283724 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 
07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.301800 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.317369 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.338043 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3
841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:23Z\\\",\\\"message\\\":\\\"7:23:23.631079 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:23:23.631159 6884 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631243 6884 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631499 6884 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:23:23.632231 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:23:23.632268 6884 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 07:23:23.632293 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:23:23.632309 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:23:23.632431 6884 factory.go:656] Stopping watch factory\\\\nI0201 07:23:23.632451 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:23:23.632677 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:23:23.632717 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 07:23:23.632728 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.351705 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364180 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.364219 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.366739 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:25Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.466775 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.541881 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:05:48.97353426 +0000 UTC Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.567755 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.567845 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.567937 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:25 crc kubenswrapper[4835]: E0201 07:23:25.567944 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:25 crc kubenswrapper[4835]: E0201 07:23:25.568082 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:25 crc kubenswrapper[4835]: E0201 07:23:25.568163 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.569643 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.671854 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.773853 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876713 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.876803 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:25 crc kubenswrapper[4835]: I0201 07:23:25.979827 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:25Z","lastTransitionTime":"2026-02-01T07:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082146 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082592 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.082780 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.185828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.186152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.186283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.186624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.186654 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.290712 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393200 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.393261 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.496780 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.542938 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:12:39.928042703 +0000 UTC Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.565722 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:26 crc kubenswrapper[4835]: E0201 07:23:26.565874 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.591028 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.600823 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.703124 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.806940 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:26 crc kubenswrapper[4835]: I0201 07:23:26.909395 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:26Z","lastTransitionTime":"2026-02-01T07:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.012627 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.114919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.114984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.115006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.115030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.115048 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218158 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218180 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.218199 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.320911 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.424331 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.528175 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.543919 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 13:17:41.053531345 +0000 UTC Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.566548 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.566588 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.566660 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:27 crc kubenswrapper[4835]: E0201 07:23:27.566770 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:27 crc kubenswrapper[4835]: E0201 07:23:27.566889 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:27 crc kubenswrapper[4835]: E0201 07:23:27.567123 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.592292 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb2
89a57e4f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.613469 4835 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.630871 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.634690 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.651996 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.682516 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb5fed5-5d65-4f0a-a51a-3109fffc9113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a4c738f66e1428697d199630cc541f018b1aa36edcb0e3e3ad32ddab2b5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f76a95142c00257f569b0db87094f23435274cbe36740d658bac63c26a55233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64accb3c02d2092922d2534d7c21dd160d0ed2b2ff1cbc19870174f818ba4486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://8444f60530510645c3592013a63e5a5b3cdf6872788309d94d5a18fe1553a937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64bfb072019b8c1917e27199bbb7b1491df307cb14257e4cd502f3062a674890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.699736 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.721975 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 
07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.734449 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.741400 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.772808 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3
841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:23Z\\\",\\\"message\\\":\\\"7:23:23.631079 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:23:23.631159 6884 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631243 6884 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631499 6884 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:23:23.632231 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:23:23.632268 6884 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 07:23:23.632293 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:23:23.632309 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:23:23.632431 6884 factory.go:656] Stopping watch factory\\\\nI0201 07:23:23.632451 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:23:23.632677 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:23:23.632717 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 07:23:23.632728 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.793045 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to 
/host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.816505 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.836013 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.837992 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.851671 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.871325 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.889617 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.906471 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.925050 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.946521 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:27Z","lastTransitionTime":"2026-02-01T07:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:27 crc kubenswrapper[4835]: I0201 07:23:27.948072 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:27Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.060881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.060950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.060968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.060992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.061008 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164718 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.164759 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.267771 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370561 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.370672 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474157 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474195 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.474213 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.545094 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 16:21:05.709478972 +0000 UTC Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.566701 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:28 crc kubenswrapper[4835]: E0201 07:23:28.566881 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.577306 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679884 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.679906 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.781977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.782152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.782175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.782196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.782213 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.885348 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:28 crc kubenswrapper[4835]: I0201 07:23:28.988377 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:28Z","lastTransitionTime":"2026-02-01T07:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.091976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.092045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.092068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.092096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.092122 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.195171 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298056 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298122 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.298199 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.402137 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.505707 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.545538 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:31:16.956377936 +0000 UTC Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.565864 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.565933 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.566018 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:29 crc kubenswrapper[4835]: E0201 07:23:29.566200 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:29 crc kubenswrapper[4835]: E0201 07:23:29.566351 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:29 crc kubenswrapper[4835]: E0201 07:23:29.566532 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.609249 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.711892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.711942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.711959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.711983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.712003 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814572 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.814722 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:29 crc kubenswrapper[4835]: I0201 07:23:29.918138 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:29Z","lastTransitionTime":"2026-02-01T07:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021232 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.021336 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.123685 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.228290 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.332258 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434882 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.434936 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.538182 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.545825 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:20:50.927911113 +0000 UTC Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.566237 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:30 crc kubenswrapper[4835]: E0201 07:23:30.566405 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.641916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.641971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.641987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.642009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.642027 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.744585 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847932 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.847951 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.951860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.951985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.952004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.952030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:30 crc kubenswrapper[4835]: I0201 07:23:30.952050 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:30Z","lastTransitionTime":"2026-02-01T07:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.054875 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.158949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.159011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.159030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.159054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.159071 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262627 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262654 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.262671 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.365942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.366014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.366039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.366073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.366095 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.415843 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.416213 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.416173575 +0000 UTC m=+148.536610049 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.469387 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.517220 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.517303 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.517382 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517396 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.517537 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517564 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.517526535 +0000 UTC m=+148.637962999 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517628 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517671 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517695 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517729 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517782 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.517757411 +0000 UTC m=+148.638193885 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517795 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517835 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.517804792 +0000 UTC m=+148.638241346 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517839 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517870 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.517965 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.517935925 +0000 UTC m=+148.638372399 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.546352 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:22:42.709787097 +0000 UTC Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.566085 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.566136 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.566288 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.566325 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.566522 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:31 crc kubenswrapper[4835]: E0201 07:23:31.566664 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.573233 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.676522 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.779936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.779996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.780017 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.780048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.780068 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.882731 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.985939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.985994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.986007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.986024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:31 crc kubenswrapper[4835]: I0201 07:23:31.986036 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:31Z","lastTransitionTime":"2026-02-01T07:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.089626 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.192333 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.295777 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.398361 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.501891 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.547491 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:09:23.632497695 +0000 UTC Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.565925 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:32 crc kubenswrapper[4835]: E0201 07:23:32.566080 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.605267 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708488 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.708518 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.811633 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.914964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.915378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.915593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.915751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:32 crc kubenswrapper[4835]: I0201 07:23:32.915913 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:32Z","lastTransitionTime":"2026-02-01T07:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.019915 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.122855 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.226866 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.329987 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433575 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.433636 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.536942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.537006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.537024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.537051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.537070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.548461 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:38:52.227991041 +0000 UTC Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.566160 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:33 crc kubenswrapper[4835]: E0201 07:23:33.566338 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.566157 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:33 crc kubenswrapper[4835]: E0201 07:23:33.566672 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.567112 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:33 crc kubenswrapper[4835]: E0201 07:23:33.567491 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639846 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.639887 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742917 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.742968 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845643 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.845669 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949537 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:33 crc kubenswrapper[4835]: I0201 07:23:33.949683 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:33Z","lastTransitionTime":"2026-02-01T07:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053713 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.053769 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.156772 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.259844 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.362242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465397 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.465473 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.549204 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:08:41.344111267 +0000 UTC Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.566748 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:34 crc kubenswrapper[4835]: E0201 07:23:34.567076 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569297 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.569334 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.585725 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.673565 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.776131 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879548 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.879600 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982467 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:34 crc kubenswrapper[4835]: I0201 07:23:34.982509 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:34Z","lastTransitionTime":"2026-02-01T07:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.085907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.085968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.085985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.086009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.086030 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091085 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091131 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.091189 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.113652 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.118605 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.138123 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.149631 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.170543 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.175401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.175699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.175904 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.176136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.176355 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.196289 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.201245 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.221027 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9d6ec0e7-f211-4b58-9cdd-b032c4656a66\\\",\\\"systemUUID\\\":\\\"83c36967-9ad2-4029-85f1-c31be3b4de3a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:35Z is after 2025-08-24T17:21:41Z"
Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.221244 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.224224 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.327657 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.430952 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.534296 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.549739 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:46:48.93853507 +0000 UTC
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.566453 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.566553 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.566552 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.566670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.566764 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:23:35 crc kubenswrapper[4835]: E0201 07:23:35.566947 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.637201 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739468 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.739498 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842125 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.842147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.945622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.945696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.945866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.945910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:35 crc kubenswrapper[4835]: I0201 07:23:35.946003 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:35Z","lastTransitionTime":"2026-02-01T07:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.048869 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.152974 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256313 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.256354 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.359813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.359964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.359984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.360009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.360026 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.463241 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.550686 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 04:09:53.669500345 +0000 UTC
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.565921 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566107 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: E0201 07:23:36.566342 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.566591 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"
Feb 01 07:23:36 crc kubenswrapper[4835]: E0201 07:23:36.566752 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.669394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.669871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.670032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.670185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.670326 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.774708 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878210 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.878255 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:36 crc kubenswrapper[4835]: I0201 07:23:36.981575 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:36Z","lastTransitionTime":"2026-02-01T07:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.085265 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.190622 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.294330 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.398284 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.501831 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.551404 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:18:53.489807105 +0000 UTC
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.566325 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.566340 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.566358 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:23:37 crc kubenswrapper[4835]: E0201 07:23:37.566775 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe"
Feb 01 07:23:37 crc kubenswrapper[4835]: E0201 07:23:37.566582 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:23:37 crc kubenswrapper[4835]: E0201 07:23:37.566933 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.592557 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5e242fa066f132e3738bcd4668b7a98a105e2c960b7335bbbaa2385796e639c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.606356 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.615091 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z"
Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.636853 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-25s9j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:20Z\\\",\\\"message\\\":\\\"2026-02-01T07:22:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d\\\\n2026-02-01T07:22:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_954fc416-b4bd-4d54-ab64-c9a1b559c13d to /host/opt/cni/bin/\\\\n2026-02-01T07:22:34Z [verbose] multus-daemon started\\\\n2026-02-01T07:22:34Z [verbose] Readiness Indicator file check\\\\n2026-02-01T07:23:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:23:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qwv4d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-25s9j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z"
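
[Editor's note: the multus-25s9j record above carries a second failure inside the patched status: kube-multus exited after waiting ~45 s (07:22:34 to 07:23:19) for the readiness-indicator file that ovn-kubernetes writes, which is also why kubelet keeps reporting "no CNI configuration file in /etc/kubernetes/cni/net.d/". A simplified Python sketch of that readiness-indicator wait; the paths are taken from the log, the polling loop and timeout constant are illustrative, not multus source:]

import os
import time

INDICATOR = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"  # written by ovnkube once the default network is up
DEADLINE = time.monotonic() + 45  # approximate window multus waited before giving up

while time.monotonic() < DEADLINE:
    if os.path.exists(INDICATOR):
        print("default network ready")
        break
    time.sleep(1)  # poll until the indicator file appears
else:
    # mirrors the [error] line captured in the container's termination message
    print(f"still waiting for readinessindicatorfile @ {INDICATOR}: timed out waiting for the condition")

[With ovnkube-controller in CrashLoopBackOff (see the 07:23:36.566752 record above), the indicator file never appears, so this wait cannot succeed either.]
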
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cf5926-f943-44c0-a351-db83ab17c2a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9779ac059e53d002d03434f310aabd56a08d4cd4c41279d65f3f668e52a1880d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ed22723cd8c7f977df1b8d05d4307e1f52dc59408905b35ef9bd888c96521e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://747a4b10395e64ab21591f4191a558bf2ce2fa1bda30c75ccc7f64a0c4d2a585\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ca1e6f1a2f290331abc491af638b5a4f956d2117fa6eb4e880b68d181d6a789f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7673a294b076fce68d39cf73ef5c9db9fce24901065d46dfa9bd918ac050d3e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1259866b010ef1a6f22bf26fd73c6c94a901d9be98c96143a0b8016ded0e7341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8912b2fe5128b84a25f0353737e778828e95bb24d1bab7039169bd6db3e22f85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qtzjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.666884 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-2msm5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tthdk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-2msm5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.678613 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"231d6283-d1e7-48ac-a4e6-0a0f8ac643d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ca0104b140843565d190249e50eeae1763756bf2cc79f052af468172322fdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://708176c5095d54a9f702a4b4de2f151840d5ca810b40315e7e6fa0b5b64c43b1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9891ab2f33678a5e5f577d1575353419f02c939d60163add991e011f848f3b8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.690120 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.705671 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b25d37344c02cbf47c7ea9dbd07f1b8b67f533db00dc16c5be7f459140f63de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709685 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.709729 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.719062 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d8kfl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6d0e64-7406-4a2b-8006-8381549b35e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3a37221520a479574906cccebcda0aa32eed2c0269222e9449c699d15f746fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6tp8v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d8kfl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc 
kubenswrapper[4835]: I0201 07:23:37.734427 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8ad734e93345a0025482ef9134540c927afc72979404a31fef686b0d083a292\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6e44cc2e77d5d93f78aba50b279adfcf682339d519473d47b0223276d4843e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.748342 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"303c450e-4b2d-4908-84e6-df8b444ed640\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cdaaf31b229e5248ba911e55f58786b736479735b93de741dde8fe6edb6ade7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jpvhf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wdt78\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.762634 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bcb829b-af6e-4f40-b31d-9abcf38c53e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44751d1619bcacbde4be80603e618132541e8aea35b1bea6e6d8805ac2a35c35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://857b570e7ae7dd450284342c471cf02691b7fa7eb5bd24ad05e6dd0115d1ff2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://857b570e7ae7dd450284342c471cf02691b7fa7eb5bd24ad05e6dd0115d1ff2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.787915 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"beb5fed5-5d65-4f0a-a51a-3109fffc9113\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a4c738f66e1428697d199630cc541f018b1aa36edcb0e3e3ad32ddab2b5586\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f76a95142c00257f569b0db87094f23435274cbe36740d658bac63c26a55233\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64accb3c02d2092922d2534d7c21dd160d0ed2b2ff1cbc19870174f818ba4486\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8444f60530510645c3592013a63e5a5b3cdf687
2788309d94d5a18fe1553a937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64bfb072019b8c1917e27199bbb7b1491df307cb14257e4cd502f3062a674890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://084b8ca0d26229f7f9b48abfd0b2c34737b94ba1564e0b9f913d594d2fbdeb13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37b3bb2745bd4b232691a2bacf466c147eea6e1068cf4399fd5b46ded7afce49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f420acbcdf8ac32ffbc7f6545be0e96c7e9630fd8285c50cda7cf636deb7769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.807836 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://02c711adecccab148cf30aeb289a57e4
f5a3348634c31d66bd17ab0519015b94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-01T07:22:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0201 07:22:21.223280 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0201 07:22:21.226237 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1537225004/tls.crt::/tmp/serving-cert-1537225004/tls.key\\\\\\\"\\\\nI0201 07:22:26.693809 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0201 07:22:26.697830 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0201 07:22:26.697874 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0201 07:22:26.697915 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0201 07:22:26.697925 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0201 07:22:26.708678 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0201 07:22:26.708717 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708727 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0201 07:22:26.708736 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0201 07:22:26.708742 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0201 07:22:26.708751 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0201 07:22:26.708757 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0201 07:22:26.708752 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0201 07:22:26.712186 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817213 4835 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.817330 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.826837 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94f9c91a-7450-4939-9808-dcc21d2eeb96\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4c45e8c9e136e58b6b6bb296a7160f5e02b57236f1c2fec30df8628b803df0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0882033ebccd13ec096ebe93d0abb367ea7c2b49ee4571850502dc9959be81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3389072313e3af0af04da04d8eb480cbb1611704cb5817a82cc66b8c9d90063\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abed9fbffe988ce9f2525f54009984e2ed1ec4aeb0a02b40b4daa103ec009253\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:08Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.846487 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.870620 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd62f19b-07ab-4cc5-84a3-2f097c278de7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3
841a636238cd164e40f765fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-01T07:23:23Z\\\",\\\"message\\\":\\\"7:23:23.631079 6884 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0201 07:23:23.631159 6884 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631243 6884 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0201 07:23:23.631499 6884 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0201 07:23:23.632231 6884 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0201 07:23:23.632268 6884 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0201 07:23:23.632293 6884 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0201 07:23:23.632309 6884 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0201 07:23:23.632431 6884 factory.go:656] Stopping watch factory\\\\nI0201 07:23:23.632451 6884 ovnkube.go:599] Stopped ovnkube\\\\nI0201 07:23:23.632677 6884 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0201 07:23:23.632717 6884 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0201 07:23:23.632728 6884 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0201 07\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-01T07:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:22:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x78ft\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5z5dl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.886171 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-l7rwg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96856bc5-b4b0-4268-8868-65a584408ca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1294d6e146105b22a8d8995414288b2afa6f282f221df94c772751cc73b240ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d2t5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-l7rwg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.901213 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97c5a8c8-51ec-4c9b-9334-1c059fce5ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-01T07:22:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea1c651dd5c3d3849e4734616a3c45f14566cf46dc599834acf21c838add32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9894d6194f3edc561bb87c41531068bb725b2be09749ce0561010a2462e4c974\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:22:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k6kkn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-01T07:22:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7r4zf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-01T07:23:37Z is after 2025-08-24T17:21:41Z" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:37 crc kubenswrapper[4835]: I0201 07:23:37.921843 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:37Z","lastTransitionTime":"2026-02-01T07:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.024896 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128477 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.128547 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.231996 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.335924 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.439662 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.544747 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.552297 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:21:02.965682753 +0000 UTC Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.565880 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:38 crc kubenswrapper[4835]: E0201 07:23:38.566046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648085 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.648126 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752460 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.752591 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.855940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.856002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.856022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.856048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.856067 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:38 crc kubenswrapper[4835]: I0201 07:23:38.959890 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:38Z","lastTransitionTime":"2026-02-01T07:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062469 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.062601 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166257 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.166277 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.269672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.269756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.270640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.270724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.270749 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.373602 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.476894 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.553017 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:46:53.234118994 +0000 UTC Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.565914 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:39 crc kubenswrapper[4835]: E0201 07:23:39.566306 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.566028 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.565947 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:39 crc kubenswrapper[4835]: E0201 07:23:39.566868 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:39 crc kubenswrapper[4835]: E0201 07:23:39.567093 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579787 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579821 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.579840 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.683885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.683951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.683974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.684006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.684027 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.787903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.788174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.788254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.788337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.788401 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.891414 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.994565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.994949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.995082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.995211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:39 crc kubenswrapper[4835]: I0201 07:23:39.995346 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:39Z","lastTransitionTime":"2026-02-01T07:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.099156 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.202991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.203057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.203070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.203091 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.203105 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.306270 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.409183 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.511864 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.553993 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:02:28.213253686 +0000 UTC Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.566630 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:40 crc kubenswrapper[4835]: E0201 07:23:40.566807 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.614744 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.717937 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821324 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.821371 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:40 crc kubenswrapper[4835]: I0201 07:23:40.924693 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:40Z","lastTransitionTime":"2026-02-01T07:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027295 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.027339 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.130791 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234415 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234470 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.234489 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.338581 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442125 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.442321 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.545926 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.555262 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:17:51.025772592 +0000 UTC Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.565817 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.565983 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:41 crc kubenswrapper[4835]: E0201 07:23:41.566184 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.566300 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:41 crc kubenswrapper[4835]: E0201 07:23:41.566596 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:41 crc kubenswrapper[4835]: E0201 07:23:41.566800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650575 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 01 07:23:41 crc kubenswrapper[4835]: I0201 07:23:41.650593 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:41Z","lastTransitionTime":"2026-02-01T07:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.064310 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:42Z","lastTransitionTime":"2026-02-01T07:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
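The same five-entry cycle repeats at roughly 100 ms intervals because every node-status update re-derives the Ready condition from the runtime's network status. A simplified sketch of that derivation, using local stand-in types rather than the real k8s.io/api structs:

package main

import (
	"fmt"
	"time"
)

// Stand-in for v1.NodeCondition; the actual kubelet uses k8s.io/api types.
type nodeCondition struct {
	Type, Status, Reason, Message string
	LastHeartbeatTime             time.Time
	LastTransitionTime            time.Time
}

// readyCondition mirrors the logic behind "Node became not ready":
// the Ready condition flips to False while the runtime network is down.
func readyCondition(networkReady bool, now time.Time) nodeCondition {
	c := nodeCondition{Type: "Ready", Status: "True", Reason: "KubeletReady",
		LastHeartbeatTime: now, LastTransitionTime: now}
	if !networkReady {
		c.Status = "False"
		c.Reason = "KubeletNotReady"
		c.Message = "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady"
	}
	return c
}

func main() {
	fmt.Printf("%+v\n", readyCondition(false, time.Now()))
}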
Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.556379 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 04:37:18.454397013 +0000 UTC
Feb 01 07:23:42 crc kubenswrapper[4835]: I0201 07:23:42.566261 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:23:42 crc kubenswrapper[4835]: E0201 07:23:42.566394 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.098976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.099064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.099083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.099116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.099137 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:43Z","lastTransitionTime":"2026-02-01T07:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.557596 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:02:01.745699542 +0000 UTC
Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.565982 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.566088 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:23:43 crc kubenswrapper[4835]: I0201 07:23:43.566100 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:23:43 crc kubenswrapper[4835]: E0201 07:23:43.566190 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:23:43 crc kubenswrapper[4835]: E0201 07:23:43.566348 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe"
Feb 01 07:23:43 crc kubenswrapper[4835]: E0201 07:23:43.566533 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029721 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.029826 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:44Z","lastTransitionTime":"2026-02-01T07:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.558485 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:06:59.155748883 +0000 UTC
Feb 01 07:23:44 crc kubenswrapper[4835]: I0201 07:23:44.565932 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:23:44 crc kubenswrapper[4835]: E0201 07:23:44.566094 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060887 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.060932 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:45Z","lastTransitionTime":"2026-02-01T07:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.559135 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:36:29.856981874 +0000 UTC
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.566861 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.566880 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.567116 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:23:45 crc kubenswrapper[4835]: E0201 07:23:45.567023 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 01 07:23:45 crc kubenswrapper[4835]: E0201 07:23:45.567313 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe"
Feb 01 07:23:45 crc kubenswrapper[4835]: E0201 07:23:45.567360 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.619703 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-01T07:23:45Z","lastTransitionTime":"2026-02-01T07:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.690248 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg"]
Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.690821 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg"
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.694515 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.694617 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.694648 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.695013 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.715107 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=11.715079168 podStartE2EDuration="11.715079168s" podCreationTimestamp="2026-02-01 07:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.715029167 +0000 UTC m=+98.835465641" watchObservedRunningTime="2026-02-01 07:23:45.715079168 +0000 UTC m=+98.835515642" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.768821 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.768795165 podStartE2EDuration="19.768795165s" podCreationTimestamp="2026-02-01 07:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.750768383 +0000 UTC m=+98.871204817" watchObservedRunningTime="2026-02-01 07:23:45.768795165 +0000 UTC m=+98.889231639" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.769116 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.769107682 podStartE2EDuration="1m18.769107682s" podCreationTimestamp="2026-02-01 07:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.768660221 +0000 UTC m=+98.889096685" watchObservedRunningTime="2026-02-01 07:23:45.769107682 +0000 UTC m=+98.889544156" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.795809 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=42.795783301 podStartE2EDuration="42.795783301s" podCreationTimestamp="2026-02-01 07:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.795495714 +0000 UTC m=+98.915932188" watchObservedRunningTime="2026-02-01 07:23:45.795783301 +0000 UTC m=+98.916219765" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84a06568-5100-4aac-b537-c6ed932d9398-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800095 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800142 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a06568-5100-4aac-b537-c6ed932d9398-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800226 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.800259 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84a06568-5100-4aac-b537-c6ed932d9398-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.849007 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podStartSLOduration=73.848970424 podStartE2EDuration="1m13.848970424s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.832003858 +0000 UTC m=+98.952440302" watchObservedRunningTime="2026-02-01 07:23:45.848970424 +0000 UTC m=+98.969406898" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900788 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84a06568-5100-4aac-b537-c6ed932d9398-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900843 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900875 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/84a06568-5100-4aac-b537-c6ed932d9398-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900959 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84a06568-5100-4aac-b537-c6ed932d9398-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.900991 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.901084 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/84a06568-5100-4aac-b537-c6ed932d9398-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.901897 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/84a06568-5100-4aac-b537-c6ed932d9398-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.911381 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84a06568-5100-4aac-b537-c6ed932d9398-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.926029 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84a06568-5100-4aac-b537-c6ed932d9398-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wlfjg\" (UID: \"84a06568-5100-4aac-b537-c6ed932d9398\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.939310 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7r4zf" podStartSLOduration=72.939278017 podStartE2EDuration="1m12.939278017s" podCreationTimestamp="2026-02-01 07:22:33 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.937060821 +0000 UTC m=+99.057497265" watchObservedRunningTime="2026-02-01 07:23:45.939278017 +0000 UTC m=+99.059714491" Feb 01 07:23:45 crc kubenswrapper[4835]: I0201 07:23:45.939996 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-l7rwg" podStartSLOduration=73.939981714 podStartE2EDuration="1m13.939981714s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.917622804 +0000 UTC m=+99.038059248" watchObservedRunningTime="2026-02-01 07:23:45.939981714 +0000 UTC m=+99.060418218" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.013367 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.015482 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-25s9j" podStartSLOduration=74.015453036 podStartE2EDuration="1m14.015453036s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:45.987200708 +0000 UTC m=+99.107637182" watchObservedRunningTime="2026-02-01 07:23:46.015453036 +0000 UTC m=+99.135889510" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.038930 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qtzjl" podStartSLOduration=74.038905443 podStartE2EDuration="1m14.038905443s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:46.017092177 +0000 UTC m=+99.137528631" watchObservedRunningTime="2026-02-01 07:23:46.038905443 +0000 UTC m=+99.159341887" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.039335 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.039328734 podStartE2EDuration="1m17.039328734s" podCreationTimestamp="2026-02-01 07:22:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:46.038201646 +0000 UTC m=+99.158638110" watchObservedRunningTime="2026-02-01 07:23:46.039328734 +0000 UTC m=+99.159765188" Feb 01 07:23:46 crc kubenswrapper[4835]: W0201 07:23:46.043557 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84a06568_5100_4aac_b537_c6ed932d9398.slice/crio-8cb8363ed955232f2c7f1971d85315af8c1054d141fda8c44a08506885e00182 WatchSource:0}: Error finding container 8cb8363ed955232f2c7f1971d85315af8c1054d141fda8c44a08506885e00182: Status 404 returned error can't find the container with id 8cb8363ed955232f2c7f1971d85315af8c1054d141fda8c44a08506885e00182 Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.096279 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-d8kfl" podStartSLOduration=74.09624048 
podStartE2EDuration="1m14.09624048s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:46.095256885 +0000 UTC m=+99.215693369" watchObservedRunningTime="2026-02-01 07:23:46.09624048 +0000 UTC m=+99.216676924" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.145710 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" event={"ID":"84a06568-5100-4aac-b537-c6ed932d9398","Type":"ContainerStarted","Data":"8cb8363ed955232f2c7f1971d85315af8c1054d141fda8c44a08506885e00182"} Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.559683 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:59:17.407771615 +0000 UTC Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.559767 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.565878 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:46 crc kubenswrapper[4835]: E0201 07:23:46.566143 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:46 crc kubenswrapper[4835]: I0201 07:23:46.570221 4835 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.151842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" event={"ID":"84a06568-5100-4aac-b537-c6ed932d9398","Type":"ContainerStarted","Data":"af1169e57b8eabf748a9eb4a93e85dc64ac61ebfa16dd47206d4e5528bb046f9"} Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.175725 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wlfjg" podStartSLOduration=75.17569643 podStartE2EDuration="1m15.17569643s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:23:47.173575757 +0000 UTC m=+100.294012231" watchObservedRunningTime="2026-02-01 07:23:47.17569643 +0000 UTC m=+100.296132904" Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.566117 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.566237 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:47 crc kubenswrapper[4835]: I0201 07:23:47.566317 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:47 crc kubenswrapper[4835]: E0201 07:23:47.567671 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:47 crc kubenswrapper[4835]: E0201 07:23:47.567817 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:47 crc kubenswrapper[4835]: E0201 07:23:47.568024 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:48 crc kubenswrapper[4835]: I0201 07:23:48.566742 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:48 crc kubenswrapper[4835]: E0201 07:23:48.567341 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:48 crc kubenswrapper[4835]: I0201 07:23:48.567766 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:23:48 crc kubenswrapper[4835]: E0201 07:23:48.568034 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:23:49 crc kubenswrapper[4835]: I0201 07:23:49.566501 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:49 crc kubenswrapper[4835]: I0201 07:23:49.566571 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:49 crc kubenswrapper[4835]: I0201 07:23:49.566631 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:49 crc kubenswrapper[4835]: E0201 07:23:49.566773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:49 crc kubenswrapper[4835]: E0201 07:23:49.566999 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:49 crc kubenswrapper[4835]: E0201 07:23:49.567121 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:50 crc kubenswrapper[4835]: I0201 07:23:50.566672 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:50 crc kubenswrapper[4835]: E0201 07:23:50.566823 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:51 crc kubenswrapper[4835]: I0201 07:23:51.565927 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:51 crc kubenswrapper[4835]: I0201 07:23:51.565997 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.566183 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:51 crc kubenswrapper[4835]: I0201 07:23:51.566442 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.566585 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.566791 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:51 crc kubenswrapper[4835]: I0201 07:23:51.673514 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.673777 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:23:51 crc kubenswrapper[4835]: E0201 07:23:51.673888 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs podName:caf346fd-1c47-4f35-a5e6-79f7ac8fcafe nodeName:}" failed. No retries permitted until 2026-02-01 07:24:55.673859179 +0000 UTC m=+168.794295643 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs") pod "network-metrics-daemon-2msm5" (UID: "caf346fd-1c47-4f35-a5e6-79f7ac8fcafe") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 01 07:23:52 crc kubenswrapper[4835]: I0201 07:23:52.566633 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:52 crc kubenswrapper[4835]: E0201 07:23:52.566764 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:53 crc kubenswrapper[4835]: I0201 07:23:53.566734 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:53 crc kubenswrapper[4835]: I0201 07:23:53.566783 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:53 crc kubenswrapper[4835]: E0201 07:23:53.566929 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:53 crc kubenswrapper[4835]: I0201 07:23:53.567240 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:53 crc kubenswrapper[4835]: E0201 07:23:53.567340 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:53 crc kubenswrapper[4835]: E0201 07:23:53.567584 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:54 crc kubenswrapper[4835]: I0201 07:23:54.566273 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:54 crc kubenswrapper[4835]: E0201 07:23:54.566501 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:55 crc kubenswrapper[4835]: I0201 07:23:55.566120 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:55 crc kubenswrapper[4835]: I0201 07:23:55.566204 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:55 crc kubenswrapper[4835]: I0201 07:23:55.566207 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:55 crc kubenswrapper[4835]: E0201 07:23:55.566287 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:55 crc kubenswrapper[4835]: E0201 07:23:55.566460 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:55 crc kubenswrapper[4835]: E0201 07:23:55.566609 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:56 crc kubenswrapper[4835]: I0201 07:23:56.566297 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:56 crc kubenswrapper[4835]: E0201 07:23:56.566916 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:57 crc kubenswrapper[4835]: I0201 07:23:57.565929 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:57 crc kubenswrapper[4835]: I0201 07:23:57.566032 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:57 crc kubenswrapper[4835]: I0201 07:23:57.566168 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:57 crc kubenswrapper[4835]: E0201 07:23:57.567800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:57 crc kubenswrapper[4835]: E0201 07:23:57.568028 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:57 crc kubenswrapper[4835]: E0201 07:23:57.568124 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:23:58 crc kubenswrapper[4835]: I0201 07:23:58.566494 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:23:58 crc kubenswrapper[4835]: E0201 07:23:58.566676 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:23:59 crc kubenswrapper[4835]: I0201 07:23:59.566639 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:23:59 crc kubenswrapper[4835]: I0201 07:23:59.566721 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:23:59 crc kubenswrapper[4835]: I0201 07:23:59.566795 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:23:59 crc kubenswrapper[4835]: E0201 07:23:59.566830 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:23:59 crc kubenswrapper[4835]: E0201 07:23:59.566906 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:23:59 crc kubenswrapper[4835]: E0201 07:23:59.567070 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:00 crc kubenswrapper[4835]: I0201 07:24:00.566688 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:00 crc kubenswrapper[4835]: E0201 07:24:00.567377 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:00 crc kubenswrapper[4835]: I0201 07:24:00.568189 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:24:00 crc kubenswrapper[4835]: E0201 07:24:00.568740 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5z5dl_openshift-ovn-kubernetes(bd62f19b-07ab-4cc5-84a3-2f097c278de7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" Feb 01 07:24:01 crc kubenswrapper[4835]: I0201 07:24:01.566065 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:01 crc kubenswrapper[4835]: I0201 07:24:01.566214 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:01 crc kubenswrapper[4835]: E0201 07:24:01.566234 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:01 crc kubenswrapper[4835]: I0201 07:24:01.566301 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:01 crc kubenswrapper[4835]: E0201 07:24:01.566396 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:01 crc kubenswrapper[4835]: E0201 07:24:01.566634 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:02 crc kubenswrapper[4835]: I0201 07:24:02.566626 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:02 crc kubenswrapper[4835]: E0201 07:24:02.566826 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:03 crc kubenswrapper[4835]: I0201 07:24:03.566090 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:03 crc kubenswrapper[4835]: I0201 07:24:03.566144 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:03 crc kubenswrapper[4835]: E0201 07:24:03.566324 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:03 crc kubenswrapper[4835]: I0201 07:24:03.566692 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:03 crc kubenswrapper[4835]: E0201 07:24:03.567125 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:03 crc kubenswrapper[4835]: E0201 07:24:03.567264 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:04 crc kubenswrapper[4835]: I0201 07:24:04.566282 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:04 crc kubenswrapper[4835]: E0201 07:24:04.566476 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:05 crc kubenswrapper[4835]: I0201 07:24:05.566123 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:05 crc kubenswrapper[4835]: I0201 07:24:05.566220 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:05 crc kubenswrapper[4835]: E0201 07:24:05.566305 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:05 crc kubenswrapper[4835]: E0201 07:24:05.566477 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:05 crc kubenswrapper[4835]: I0201 07:24:05.566532 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:05 crc kubenswrapper[4835]: E0201 07:24:05.566687 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:06 crc kubenswrapper[4835]: I0201 07:24:06.566072 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:06 crc kubenswrapper[4835]: E0201 07:24:06.566401 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.224173 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/1.log" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.224975 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/0.log" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.225044 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e" containerID="c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27" exitCode=1 Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.225090 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerDied","Data":"c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27"} Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.225137 4835 scope.go:117] "RemoveContainer" containerID="213d8504b5482d6fffc521b115b6848e8bdcd8146acfc17bbb3a40c47b1fc8bd" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.226017 4835 scope.go:117] "RemoveContainer" containerID="c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.226453 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-25s9j_openshift-multus(c9342eb7-b5ae-47b2-a56d-91ae886e5f0e)\"" pod="openshift-multus/multus-25s9j" podUID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.506208 4835 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.566030 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.566111 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:07 crc kubenswrapper[4835]: I0201 07:24:07.566533 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.566822 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.566951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.567067 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:07 crc kubenswrapper[4835]: E0201 07:24:07.683647 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 01 07:24:08 crc kubenswrapper[4835]: I0201 07:24:08.231347 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/1.log" Feb 01 07:24:08 crc kubenswrapper[4835]: I0201 07:24:08.566506 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:08 crc kubenswrapper[4835]: E0201 07:24:08.566695 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:09 crc kubenswrapper[4835]: I0201 07:24:09.566086 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:09 crc kubenswrapper[4835]: I0201 07:24:09.566152 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:09 crc kubenswrapper[4835]: I0201 07:24:09.566108 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:09 crc kubenswrapper[4835]: E0201 07:24:09.566310 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:09 crc kubenswrapper[4835]: E0201 07:24:09.566462 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:09 crc kubenswrapper[4835]: E0201 07:24:09.566723 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:10 crc kubenswrapper[4835]: I0201 07:24:10.566126 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:10 crc kubenswrapper[4835]: E0201 07:24:10.566321 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:11 crc kubenswrapper[4835]: I0201 07:24:11.566988 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:11 crc kubenswrapper[4835]: I0201 07:24:11.567071 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:11 crc kubenswrapper[4835]: I0201 07:24:11.567106 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:11 crc kubenswrapper[4835]: E0201 07:24:11.567242 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:11 crc kubenswrapper[4835]: E0201 07:24:11.567616 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:11 crc kubenswrapper[4835]: E0201 07:24:11.567760 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:11 crc kubenswrapper[4835]: I0201 07:24:11.568924 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.250721 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.253978 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerStarted","Data":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.254566 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.286178 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podStartSLOduration=100.286160084 podStartE2EDuration="1m40.286160084s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:12.284841281 +0000 UTC m=+125.405277745" watchObservedRunningTime="2026-02-01 07:24:12.286160084 +0000 UTC m=+125.406596528" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.565961 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:12 crc kubenswrapper[4835]: E0201 07:24:12.566230 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.630235 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2msm5"] Feb 01 07:24:12 crc kubenswrapper[4835]: I0201 07:24:12.630529 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:12 crc kubenswrapper[4835]: E0201 07:24:12.630832 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:12 crc kubenswrapper[4835]: E0201 07:24:12.685932 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 01 07:24:13 crc kubenswrapper[4835]: I0201 07:24:13.566758 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:13 crc kubenswrapper[4835]: I0201 07:24:13.567003 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:13 crc kubenswrapper[4835]: E0201 07:24:13.567258 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:13 crc kubenswrapper[4835]: E0201 07:24:13.567489 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:14 crc kubenswrapper[4835]: I0201 07:24:14.566772 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:14 crc kubenswrapper[4835]: I0201 07:24:14.566876 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:14 crc kubenswrapper[4835]: E0201 07:24:14.566955 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:14 crc kubenswrapper[4835]: E0201 07:24:14.567110 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:15 crc kubenswrapper[4835]: I0201 07:24:15.566002 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:15 crc kubenswrapper[4835]: I0201 07:24:15.566087 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:15 crc kubenswrapper[4835]: E0201 07:24:15.566210 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:15 crc kubenswrapper[4835]: E0201 07:24:15.566951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:16 crc kubenswrapper[4835]: I0201 07:24:16.180748 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:24:16 crc kubenswrapper[4835]: I0201 07:24:16.566752 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:16 crc kubenswrapper[4835]: I0201 07:24:16.566774 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:16 crc kubenswrapper[4835]: E0201 07:24:16.566939 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:16 crc kubenswrapper[4835]: E0201 07:24:16.567085 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:17 crc kubenswrapper[4835]: I0201 07:24:17.565722 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:17 crc kubenswrapper[4835]: I0201 07:24:17.565722 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:17 crc kubenswrapper[4835]: E0201 07:24:17.567943 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:17 crc kubenswrapper[4835]: E0201 07:24:17.567854 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:17 crc kubenswrapper[4835]: E0201 07:24:17.687576 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 01 07:24:18 crc kubenswrapper[4835]: I0201 07:24:18.566187 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:18 crc kubenswrapper[4835]: I0201 07:24:18.566233 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:18 crc kubenswrapper[4835]: E0201 07:24:18.566378 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:18 crc kubenswrapper[4835]: E0201 07:24:18.567008 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:19 crc kubenswrapper[4835]: I0201 07:24:19.566558 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:19 crc kubenswrapper[4835]: I0201 07:24:19.566633 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:19 crc kubenswrapper[4835]: E0201 07:24:19.566735 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:19 crc kubenswrapper[4835]: E0201 07:24:19.566932 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:20 crc kubenswrapper[4835]: I0201 07:24:20.566335 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:20 crc kubenswrapper[4835]: I0201 07:24:20.566474 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:20 crc kubenswrapper[4835]: E0201 07:24:20.566538 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:20 crc kubenswrapper[4835]: E0201 07:24:20.566655 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:20 crc kubenswrapper[4835]: I0201 07:24:20.567799 4835 scope.go:117] "RemoveContainer" containerID="c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27" Feb 01 07:24:21 crc kubenswrapper[4835]: I0201 07:24:21.291519 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/1.log" Feb 01 07:24:21 crc kubenswrapper[4835]: I0201 07:24:21.291946 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22"} Feb 01 07:24:21 crc kubenswrapper[4835]: I0201 07:24:21.566144 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:21 crc kubenswrapper[4835]: E0201 07:24:21.566293 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 01 07:24:21 crc kubenswrapper[4835]: I0201 07:24:21.566605 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:21 crc kubenswrapper[4835]: E0201 07:24:21.566824 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 01 07:24:22 crc kubenswrapper[4835]: I0201 07:24:22.566714 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:22 crc kubenswrapper[4835]: E0201 07:24:22.566899 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2msm5" podUID="caf346fd-1c47-4f35-a5e6-79f7ac8fcafe" Feb 01 07:24:22 crc kubenswrapper[4835]: I0201 07:24:22.566741 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:22 crc kubenswrapper[4835]: E0201 07:24:22.567004 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.566296 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.566314 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.569786 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.575343 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.576122 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 01 07:24:23 crc kubenswrapper[4835]: I0201 07:24:23.576570 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 01 07:24:24 crc kubenswrapper[4835]: I0201 07:24:24.566106 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 01 07:24:24 crc kubenswrapper[4835]: I0201 07:24:24.566124 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:24 crc kubenswrapper[4835]: I0201 07:24:24.572116 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 01 07:24:24 crc kubenswrapper[4835]: I0201 07:24:24.572189 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.388555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.436460 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bztv4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.437364 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.438533 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-547k6"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.439349 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.440223 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.440860 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.446339 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.446658 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.447088 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.447172 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.449133 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.451284 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.451582 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.451755 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.452129 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 
07:24:26.452276 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.452921 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.453056 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.453153 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.453164 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.455703 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.453955 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.456129 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457062 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457153 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-client\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457222 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-serving-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457281 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457317 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-image-import-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457391 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58tf\" (UniqueName: \"kubernetes.io/projected/5b3e26c6-a029-4767-b371-579d2c682296-kube-api-access-j58tf\") pod 
\"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457499 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457495 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-node-pullsecrets\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457701 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457756 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-auth-proxy-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457818 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b3e26c6-a029-4767-b371-579d2c682296-machine-approver-tls\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.457878 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458022 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-serving-cert\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458191 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458270 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit-dir\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458335 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458503 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-encryption-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458573 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458628 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2chhv\" (UniqueName: \"kubernetes.io/projected/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-kube-api-access-2chhv\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458716 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.458784 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.460435 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whqd4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.461369 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.465693 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.466059 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.466539 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.467115 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.469824 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.478727 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.479431 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.480442 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.480965 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.481068 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.482292 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.484042 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t4w45"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.484656 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.489868 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.490405 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.491256 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.492608 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x4ddr"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.493234 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.494328 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8hgqx"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.494962 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.541730 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.542250 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.544090 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.544484 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.544923 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.545198 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.546255 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.546338 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.546480 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.550952 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.551132 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553040 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553152 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553251 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553350 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553665 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.553908 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554064 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554182 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554355 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554462 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554534 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554680 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554720 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554813 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554852 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554684 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.554972 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.560023 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.560442 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xgqrp"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.560742 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-k8v8n"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.561014 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.561301 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.561577 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562005 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562271 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562712 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-console-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562737 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562769 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562788 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-dir\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-images\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562819 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562843 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562863 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-trusted-ca\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562878 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8z7\" (UniqueName: \"kubernetes.io/projected/9154a093-1841-44f5-a71d-e42f5c19dfba-kube-api-access-td8z7\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562895 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-client\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562910 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-serving-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562926 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-service-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562944 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-serving-cert\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562959 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-service-ca\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562974 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.562998 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-image-import-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563015 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad3b595-c72f-49b8-92e0-932f9f591375-serving-cert\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563030 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrz6\" (UniqueName: \"kubernetes.io/projected/cad3b595-c72f-49b8-92e0-932f9f591375-kube-api-access-bjrz6\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563044 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-config\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563061 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58tf\" (UniqueName: \"kubernetes.io/projected/5b3e26c6-a029-4767-b371-579d2c682296-kube-api-access-j58tf\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563079 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563095 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563111 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-node-pullsecrets\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563126 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563144 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-auth-proxy-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563159 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5rsk\" (UniqueName: \"kubernetes.io/projected/03f29b26-d2bd-48e2-9804-c90a5315658c-kube-api-access-m5rsk\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563175 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-config\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563191 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wjd\" (UniqueName: \"kubernetes.io/projected/8924e4db-3c47-4e66-90d1-e74e49f3a65d-kube-api-access-94wjd\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563207 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563221 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563236 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b3e26c6-a029-4767-b371-579d2c682296-machine-approver-tls\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563251 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563266 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563280 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-oauth-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563296 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-serving-cert\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563311 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563326 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563343 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563378 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit-dir\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 
crc kubenswrapper[4835]: I0201 07:24:26.563393 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563423 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjc5q\" (UniqueName: \"kubernetes.io/projected/90833a57-ccdb-452f-b86a-7741f52c5a80-kube-api-access-bjc5q\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563441 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563457 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563472 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxnz\" (UniqueName: \"kubernetes.io/projected/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-kube-api-access-pfxnz\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563490 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563505 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76xnv\" (UniqueName: \"kubernetes.io/projected/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-kube-api-access-76xnv\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563518 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-client\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563533 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563546 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-config\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563569 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-oauth-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563583 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8924e4db-3c47-4e66-90d1-e74e49f3a65d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563606 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563622 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563637 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563651 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90833a57-ccdb-452f-b86a-7741f52c5a80-serving-cert\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563672 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-policies\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563689 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-encryption-config\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563707 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563721 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-encryption-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563739 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-trusted-ca-bundle\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563755 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2chhv\" (UniqueName: \"kubernetes.io/projected/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-kube-api-access-2chhv\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563769 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563785 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563801 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563816 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90833a57-ccdb-452f-b86a-7741f52c5a80-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.563832 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-serving-cert\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.564365 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.564388 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.564748 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.564980 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zq4gf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565274 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565343 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565403 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-node-pullsecrets\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565524 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.565641 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.566099 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.567684 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-auth-proxy-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.567887 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-serving-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.568482 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b3e26c6-a029-4767-b371-579d2c682296-config\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.568515 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.568552 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-audit-dir\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.571496 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-etcd-client\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.571753 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-image-import-ca\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.572154 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bztv4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.572552 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.574467 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5b3e26c6-a029-4767-b371-579d2c682296-machine-approver-tls\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.574520 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.574856 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.574987 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.575235 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.575375 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.575527 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.575949 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-serving-cert\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.578197 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-sdz4h"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.579018 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.579472 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.580057 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.580711 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.581241 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.581880 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.582204 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.583740 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.584136 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.584855 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-encryption-config\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.584940 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.585850 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.591905 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592226 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592337 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592458 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592620 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592730 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592845 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.592880 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.593012 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.593101 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.596820 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.598279 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.609706 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.609858 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.610019 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.611016 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.610093 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.611781 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.611998 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.612196 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.612276 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.612818 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.612933 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.613114 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.613402 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.614293 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.614885 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fbdw8"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.615673 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.616202 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.616401 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.619518 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.620096 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.620284 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.620346 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.620707 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.621216 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.621332 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.621563 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.622687 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.624168 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.624288 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.624842 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.625345 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.625750 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.636334 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637286 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637659 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637922 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637941 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.637684 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.645568 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.646206 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.646212 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.646519 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.646909 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.647395 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.647516 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.648634 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4qc29"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.649528 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.653588 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.655470 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.655812 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.656030 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.657023 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.657261 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.657556 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.661846 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.661904 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-serving-cert\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664347 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664368 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-config\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664387 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdn6\" (UniqueName: \"kubernetes.io/projected/230baada-7ff6-4b95-b44f-b46e54fe1375-kube-api-access-sqdn6\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 
crc kubenswrapper[4835]: I0201 07:24:26.664424 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664443 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-console-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664459 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664474 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664491 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/92112e1c-6b23-4d10-9f2b-0e33616c96f5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664509 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664524 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-dir\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664543 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-images\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664559 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-trusted-ca\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8z7\" (UniqueName: \"kubernetes.io/projected/9154a093-1841-44f5-a71d-e42f5c19dfba-kube-api-access-td8z7\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664618 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664637 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-service-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664659 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-service-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664673 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/29ce863d-02cf-43c6-a249-bfef15cf04be-kube-api-access-b4j9d\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664688 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664702 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664718 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrgj\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-kube-api-access-vfrgj\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-service-ca-bundle\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664771 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664788 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e130d-2f68-47ef-8b6c-2871d38a2282-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664806 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/a86cb99d-3be8-4acb-98f7-87c5df66c339-kube-api-access-2tjgn\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664835 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-serving-cert\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664850 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-service-ca\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664866 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664882 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/865ec974-02ed-4218-a599-cf69b6f0a538-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664898 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664922 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrz6\" (UniqueName: \"kubernetes.io/projected/cad3b595-c72f-49b8-92e0-932f9f591375-kube-api-access-bjrz6\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664940 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-config\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad3b595-c72f-49b8-92e0-932f9f591375-serving-cert\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664977 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.664992 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665009 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-images\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665031 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2xl\" (UniqueName: \"kubernetes.io/projected/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-kube-api-access-qz2xl\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665055 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5rsk\" (UniqueName: \"kubernetes.io/projected/03f29b26-d2bd-48e2-9804-c90a5315658c-kube-api-access-m5rsk\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665074 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-config\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665091 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94wjd\" (UniqueName: \"kubernetes.io/projected/8924e4db-3c47-4e66-90d1-e74e49f3a65d-kube-api-access-94wjd\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665108 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665124 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665149 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-metrics-certs\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665164 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: 
\"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665181 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665196 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-client\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665217 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863e130d-2f68-47ef-8b6c-2871d38a2282-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665238 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230baada-7ff6-4b95-b44f-b46e54fe1375-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665275 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-oauth-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665296 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e130d-2f68-47ef-8b6c-2871d38a2282-config\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665328 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665353 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-console-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: 
I0201 07:24:26.665357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665456 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg2rk\" (UniqueName: \"kubernetes.io/projected/8589782d-8533-4419-b9bf-115446144a39-kube-api-access-gg2rk\") pod \"migrator-59844c95c7-7nw98\" (UID: \"8589782d-8533-4419-b9bf-115446144a39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665496 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665525 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-stats-auth\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665554 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtlhj\" (UniqueName: \"kubernetes.io/projected/d597b1c7-2562-45a2-b301-14d0db548bc8-kube-api-access-xtlhj\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665581 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlkfg\" (UniqueName: \"kubernetes.io/projected/92112e1c-6b23-4d10-9f2b-0e33616c96f5-kube-api-access-qlkfg\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665615 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjzt\" (UniqueName: \"kubernetes.io/projected/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-kube-api-access-kdjzt\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665661 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " 
pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665688 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjc5q\" (UniqueName: \"kubernetes.io/projected/90833a57-ccdb-452f-b86a-7741f52c5a80-kube-api-access-bjc5q\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665715 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665743 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665766 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-serving-cert\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665823 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665848 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665874 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t4w45"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665925 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.666456 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.666482 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.666503 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.666457 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-dir\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667284 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667477 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-images\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667510 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.665879 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfxnz\" (UniqueName: \"kubernetes.io/projected/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-kube-api-access-pfxnz\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667605 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667639 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76xnv\" (UniqueName: \"kubernetes.io/projected/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-kube-api-access-76xnv\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667661 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-client\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667679 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-default-certificate\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667686 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667695 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqf4\" (UniqueName: \"kubernetes.io/projected/60b0275a-57b6-482d-b046-ffd270801add-kube-api-access-fhqf4\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667716 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667841 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-config\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667862 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a86cb99d-3be8-4acb-98f7-87c5df66c339-proxy-tls\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667919 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-oauth-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667935 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8924e4db-3c47-4e66-90d1-e74e49f3a65d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87528d59-5bdb-4e92-8d6e-062005390f6f-metrics-tls\") pod 
\"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667976 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.667976 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.677022 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-trusted-ca\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.677461 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.677714 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-service-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.678324 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.678579 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-config\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.679584 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680672 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680760 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrb9\" (UniqueName: \"kubernetes.io/projected/87528d59-5bdb-4e92-8d6e-062005390f6f-kube-api-access-lcrb9\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680863 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-policies\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680942 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681078 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90833a57-ccdb-452f-b86a-7741f52c5a80-serving-cert\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681164 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-encryption-config\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681258 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-trusted-ca-bundle\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681330 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597b1c7-2562-45a2-b301-14d0db548bc8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681443 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681525 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.684983 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.685114 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/230baada-7ff6-4b95-b44f-b46e54fe1375-proxy-tls\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.685199 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90833a57-ccdb-452f-b86a-7741f52c5a80-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.685292 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/865ec974-02ed-4218-a599-cf69b6f0a538-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.685361 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597b1c7-2562-45a2-b301-14d0db548bc8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.679924 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.686162 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-etcd-client\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681347 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.687398 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.681804 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.682530 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.684325 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-oauth-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.688439 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.688878 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.689212 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/90833a57-ccdb-452f-b86a-7741f52c5a80-available-featuregates\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.684809 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-config\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.684804 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whqd4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.689614 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.690637 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691121 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691505 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680262 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-serving-cert\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.680796 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-service-ca\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691916 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8hgqx"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691998 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x4ddr"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.691974 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03f29b26-d2bd-48e2-9804-c90a5315658c-audit-policies\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.692273 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.690848 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.692719 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.693182 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8924e4db-3c47-4e66-90d1-e74e49f3a65d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.693792 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-oauth-config\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.693901 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cad3b595-c72f-49b8-92e0-932f9f591375-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.693926 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.694543 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8924e4db-3c47-4e66-90d1-e74e49f3a65d-config\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.694870 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9154a093-1841-44f5-a71d-e42f5c19dfba-trusted-ca-bundle\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.695191 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90833a57-ccdb-452f-b86a-7741f52c5a80-serving-cert\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.695876 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m"] Feb 
01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.696082 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.696470 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58tf\" (UniqueName: \"kubernetes.io/projected/5b3e26c6-a029-4767-b371-579d2c682296-kube-api-access-j58tf\") pod \"machine-approver-56656f9798-547k6\" (UID: \"5b3e26c6-a029-4767-b371-579d2c682296\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.697539 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cad3b595-c72f-49b8-92e0-932f9f591375-serving-cert\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.697553 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.698099 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-serving-cert\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.698163 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.698613 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xgqrp"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.698922 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.699213 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/03f29b26-d2bd-48e2-9804-c90a5315658c-encryption-config\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 
07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.699275 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9154a093-1841-44f5-a71d-e42f5c19dfba-console-serving-cert\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.699530 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.700476 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.701784 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.705172 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.708486 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.710443 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.711954 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.713281 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.714340 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.715205 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gmr7g"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.716120 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.716203 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.717206 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.718202 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k8v8n"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.719273 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.720194 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.721150 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.722136 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.723100 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zq4gf"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.724089 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwsnp"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.725124 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.725211 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.726159 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.726325 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.727115 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.728114 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.729048 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.730059 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.731276 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gmr7g"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.732466 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.732977 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.733938 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4qc29"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.735245 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.736287 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fbdw8"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.737286 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwsnp"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.738329 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.738592 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.739672 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2vc59"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.740202 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.740730 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-shvm4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.741562 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.742187 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-shvm4"] Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.758820 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786565 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786621 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597b1c7-2562-45a2-b301-14d0db548bc8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786671 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786690 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/230baada-7ff6-4b95-b44f-b46e54fe1375-proxy-tls\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786706 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/865ec974-02ed-4218-a599-cf69b6f0a538-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786722 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597b1c7-2562-45a2-b301-14d0db548bc8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786763 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc 
kubenswrapper[4835]: I0201 07:24:26.786785 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786805 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-config\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786841 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqdn6\" (UniqueName: \"kubernetes.io/projected/230baada-7ff6-4b95-b44f-b46e54fe1375-kube-api-access-sqdn6\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786879 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786915 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786934 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/92112e1c-6b23-4d10-9f2b-0e33616c96f5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.786963 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-service-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787026 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/29ce863d-02cf-43c6-a249-bfef15cf04be-kube-api-access-b4j9d\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787044 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787081 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787096 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787114 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e130d-2f68-47ef-8b6c-2871d38a2282-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/a86cb99d-3be8-4acb-98f7-87c5df66c339-kube-api-access-2tjgn\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787158 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787167 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfrgj\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-kube-api-access-vfrgj\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787185 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-service-ca-bundle\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 
07:24:26.787202 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/865ec974-02ed-4218-a599-cf69b6f0a538-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787240 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787274 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-images\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.787312 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2xl\" (UniqueName: \"kubernetes.io/projected/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-kube-api-access-qz2xl\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788111 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-metrics-certs\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788132 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788311 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788317 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-client\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788381 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/863e130d-2f68-47ef-8b6c-2871d38a2282-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230baada-7ff6-4b95-b44f-b46e54fe1375-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788442 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e130d-2f68-47ef-8b6c-2871d38a2282-config\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788466 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg2rk\" (UniqueName: \"kubernetes.io/projected/8589782d-8533-4419-b9bf-115446144a39-kube-api-access-gg2rk\") pod \"migrator-59844c95c7-7nw98\" (UID: \"8589782d-8533-4419-b9bf-115446144a39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788489 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-stats-auth\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788508 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtlhj\" (UniqueName: \"kubernetes.io/projected/d597b1c7-2562-45a2-b301-14d0db548bc8-kube-api-access-xtlhj\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788526 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlkfg\" (UniqueName: \"kubernetes.io/projected/92112e1c-6b23-4d10-9f2b-0e33616c96f5-kube-api-access-qlkfg\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788548 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdjzt\" (UniqueName: \"kubernetes.io/projected/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-kube-api-access-kdjzt\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-serving-cert\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788598 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788624 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788641 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788663 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-default-certificate\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788679 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqf4\" (UniqueName: \"kubernetes.io/projected/60b0275a-57b6-482d-b046-ffd270801add-kube-api-access-fhqf4\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788709 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a86cb99d-3be8-4acb-98f7-87c5df66c339-proxy-tls\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788731 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87528d59-5bdb-4e92-8d6e-062005390f6f-metrics-tls\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788783 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcrb9\" (UniqueName: \"kubernetes.io/projected/87528d59-5bdb-4e92-8d6e-062005390f6f-kube-api-access-lcrb9\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788813 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.788945 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.789094 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/865ec974-02ed-4218-a599-cf69b6f0a538-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.789589 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.789627 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/863e130d-2f68-47ef-8b6c-2871d38a2282-config\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.789675 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.790043 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/230baada-7ff6-4b95-b44f-b46e54fe1375-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.790835 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.791190 4835 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.791999 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-serving-cert\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.792008 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.792515 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/92112e1c-6b23-4d10-9f2b-0e33616c96f5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.792918 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-client\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.792960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87528d59-5bdb-4e92-8d6e-062005390f6f-metrics-tls\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.794009 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/863e130d-2f68-47ef-8b6c-2871d38a2282-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.800845 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 01 07:24:26 crc kubenswrapper[4835]: W0201 07:24:26.803514 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3e26c6_a029_4767_b371_579d2c682296.slice/crio-16f3beecfd6587f4f93a6de28f874c2b04b37f7bab6f970a7e8163d9e1c9c34b WatchSource:0}: Error finding container 16f3beecfd6587f4f93a6de28f874c2b04b37f7bab6f970a7e8163d9e1c9c34b: Status 404 returned error can't find the container with id 16f3beecfd6587f4f93a6de28f874c2b04b37f7bab6f970a7e8163d9e1c9c34b Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.820518 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.828569 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-config\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.839789 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.848771 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.859049 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.868446 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29ce863d-02cf-43c6-a249-bfef15cf04be-etcd-service-ca\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.880246 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.900290 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.921521 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.931280 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/865ec974-02ed-4218-a599-cf69b6f0a538-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.939788 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.959050 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 01 07:24:26 crc kubenswrapper[4835]: I0201 07:24:26.999737 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") pod \"route-controller-manager-6576b87f9c-2qjjt\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.023061 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 01 07:24:27 crc kubenswrapper[4835]: 
I0201 07:24:27.031286 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2chhv\" (UniqueName: \"kubernetes.io/projected/bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c-kube-api-access-2chhv\") pod \"apiserver-76f77b778f-bztv4\" (UID: \"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c\") " pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.040925 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.059704 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.075983 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.080520 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.101280 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.120431 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.140204 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.149650 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.160163 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.171807 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-metrics-certs\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.183832 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.193950 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-default-certificate\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.235617 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.235673 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.239139 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-service-ca-bundle\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.239705 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.245266 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-stats-auth\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.264195 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.280339 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.300519 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.314173 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" event={"ID":"5b3e26c6-a029-4767-b371-579d2c682296","Type":"ContainerStarted","Data":"c148ef3c7d3fd5fd5bb0f93108341f537087d34a5401d5d8334f9efa0fc966a6"} Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.314218 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" event={"ID":"5b3e26c6-a029-4767-b371-579d2c682296","Type":"ContainerStarted","Data":"16f3beecfd6587f4f93a6de28f874c2b04b37f7bab6f970a7e8163d9e1c9c34b"} Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.322263 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.339642 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.358627 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.369181 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bztv4"] Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.372471 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:27 crc kubenswrapper[4835]: W0201 07:24:27.374619 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbc68445_c2f0_43a6_a4f5_6ea9b4a37d1c.slice/crio-de74b446923947702a3ad65c60e77cb0f7508de27dc774fdcff1583059a317eb WatchSource:0}: Error finding container de74b446923947702a3ad65c60e77cb0f7508de27dc774fdcff1583059a317eb: Status 404 returned error can't find the container with id de74b446923947702a3ad65c60e77cb0f7508de27dc774fdcff1583059a317eb Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.381006 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.385110 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.389604 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:27 crc kubenswrapper[4835]: W0201 07:24:27.391761 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f4b60b_0076_4087_b541_4617c3752687.slice/crio-eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879 WatchSource:0}: Error finding container eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879: Status 404 returned error can't find the container with id eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879 Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.399236 4835 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.419768 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.439741 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.452925 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d597b1c7-2562-45a2-b301-14d0db548bc8-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.459717 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.480637 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.488455 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597b1c7-2562-45a2-b301-14d0db548bc8-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.498732 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.519527 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.539595 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.559535 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.587971 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.598462 4835 request.go:700] Waited for 1.014041373s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.599647 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.620165 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 01 07:24:27 crc 
kubenswrapper[4835]: I0201 07:24:27.639370 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.650794 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/230baada-7ff6-4b95-b44f-b46e54fe1375-proxy-tls\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.659545 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.668453 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a86cb99d-3be8-4acb-98f7-87c5df66c339-images\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.681065 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.700282 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.722195 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a86cb99d-3be8-4acb-98f7-87c5df66c339-proxy-tls\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.738973 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.761977 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 01 07:24:27 crc kubenswrapper[4835]: E0201 07:24:27.787676 4835 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 01 07:24:27 crc kubenswrapper[4835]: E0201 07:24:27.787775 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert podName:60b0275a-57b6-482d-b046-ffd270801add nodeName:}" failed. No retries permitted until 2026-02-01 07:24:28.287751502 +0000 UTC m=+141.408187956 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert") pod "olm-operator-6b444d44fb-p5fjs" (UID: "60b0275a-57b6-482d-b046-ffd270801add") : failed to sync secret cache: timed out waiting for the condition Feb 01 07:24:27 crc kubenswrapper[4835]: E0201 07:24:27.789099 4835 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 01 07:24:27 crc kubenswrapper[4835]: E0201 07:24:27.789152 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert podName:60b0275a-57b6-482d-b046-ffd270801add nodeName:}" failed. No retries permitted until 2026-02-01 07:24:28.289138498 +0000 UTC m=+141.409574942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert") pod "olm-operator-6b444d44fb-p5fjs" (UID: "60b0275a-57b6-482d-b046-ffd270801add") : failed to sync secret cache: timed out waiting for the condition Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.799473 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.819877 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.840238 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.860977 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.880957 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.899929 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.920210 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.940372 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.960533 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 01 07:24:27 crc kubenswrapper[4835]: I0201 07:24:27.980669 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.000667 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.039384 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.047782 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfxnz\" (UniqueName: \"kubernetes.io/projected/19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1-kube-api-access-pfxnz\") pod \"openshift-apiserver-operator-796bbdcf4f-dj84j\" (UID: \"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.059602 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.072192 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.114471 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrz6\" (UniqueName: \"kubernetes.io/projected/cad3b595-c72f-49b8-92e0-932f9f591375-kube-api-access-bjrz6\") pod \"authentication-operator-69f744f599-x4ddr\" (UID: \"cad3b595-c72f-49b8-92e0-932f9f591375\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.140657 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8z7\" (UniqueName: \"kubernetes.io/projected/9154a093-1841-44f5-a71d-e42f5c19dfba-kube-api-access-td8z7\") pod \"console-f9d7485db-8hgqx\" (UID: \"9154a093-1841-44f5-a71d-e42f5c19dfba\") " pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.150327 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76xnv\" (UniqueName: \"kubernetes.io/projected/fb0c8a64-40d8-4fff-8ca4-b573df90cd88-kube-api-access-76xnv\") pod \"console-operator-58897d9998-t4w45\" (UID: \"fb0c8a64-40d8-4fff-8ca4-b573df90cd88\") " pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.157921 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.164906 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.169552 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjc5q\" (UniqueName: \"kubernetes.io/projected/90833a57-ccdb-452f-b86a-7741f52c5a80-kube-api-access-bjc5q\") pod \"openshift-config-operator-7777fb866f-k4l2m\" (UID: \"90833a57-ccdb-452f-b86a-7741f52c5a80\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.183548 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.190780 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") pod \"oauth-openshift-558db77b4-tkff4\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.196717 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.199888 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.219965 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.232943 4835 csr.go:261] certificate signing request csr-8s6rb is approved, waiting to be issued Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.242205 4835 csr.go:257] certificate signing request csr-8s6rb is issued Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.252843 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.261299 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.280807 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.300017 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.318990 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.324169 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.324581 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" event={"ID":"46f4b60b-0076-4087-b541-4617c3752687","Type":"ContainerStarted","Data":"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.324611 
4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" event={"ID":"46f4b60b-0076-4087-b541-4617c3752687","Type":"ContainerStarted","Data":"eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.325032 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.326229 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" event={"ID":"5b3e26c6-a029-4767-b371-579d2c682296","Type":"ContainerStarted","Data":"4cb1d35e6c7f1e7a19ba678cd0e9b0a10806a65e7550f712108a7bad7aea1c82"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.328793 4835 generic.go:334] "Generic (PLEG): container finished" podID="bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c" containerID="2901b34895fd28d9896d7002e8236e953069596c04847e31beb93b86309f900c" exitCode=0 Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.328823 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" event={"ID":"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c","Type":"ContainerDied","Data":"2901b34895fd28d9896d7002e8236e953069596c04847e31beb93b86309f900c"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.328841 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" event={"ID":"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c","Type":"ContainerStarted","Data":"de74b446923947702a3ad65c60e77cb0f7508de27dc774fdcff1583059a317eb"} Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.339318 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.348545 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.350599 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.354019 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-srv-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.354500 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/60b0275a-57b6-482d-b046-ffd270801add-profile-collector-cert\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.359866 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.379622 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.391085 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-x4ddr"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.399867 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: W0201 07:24:28.406491 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcad3b595_c72f_49b8_92e0_932f9f591375.slice/crio-2271121ff70fd4f78d2c75e5e8c785b61155e20a1253d6b08d8d07df78c9f569 WatchSource:0}: Error finding container 2271121ff70fd4f78d2c75e5e8c785b61155e20a1253d6b08d8d07df78c9f569: Status 404 returned error can't find the container with id 2271121ff70fd4f78d2c75e5e8c785b61155e20a1253d6b08d8d07df78c9f569 Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.414126 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.437012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5rsk\" (UniqueName: \"kubernetes.io/projected/03f29b26-d2bd-48e2-9804-c90a5315658c-kube-api-access-m5rsk\") pod \"apiserver-7bbb656c7d-j9pkf\" (UID: \"03f29b26-d2bd-48e2-9804-c90a5315658c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.447620 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.454212 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94wjd\" (UniqueName: \"kubernetes.io/projected/8924e4db-3c47-4e66-90d1-e74e49f3a65d-kube-api-access-94wjd\") pod \"machine-api-operator-5694c8668f-whqd4\" (UID: \"8924e4db-3c47-4e66-90d1-e74e49f3a65d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.459194 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.479621 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.499193 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.522047 4835 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.522397 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.541198 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.560728 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.596073 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.598607 4835 request.go:700] Waited for 1.858201758s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.599772 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.621500 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.629048 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.639563 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8hgqx"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.639783 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 01 07:24:28 crc kubenswrapper[4835]: W0201 07:24:28.645119 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90833a57_ccdb_452f_b86a_7741f52c5a80.slice/crio-86004b2c0e2950bc7fcf4234811311c6c60c4eb8bbeb3a5fbbf9c12d3ebba80e WatchSource:0}: Error finding container 86004b2c0e2950bc7fcf4234811311c6c60c4eb8bbeb3a5fbbf9c12d3ebba80e: Status 404 returned error can't find the container with id 86004b2c0e2950bc7fcf4234811311c6c60c4eb8bbeb3a5fbbf9c12d3ebba80e Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.654200 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.656219 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.660199 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: W0201 07:24:28.677123 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62724c3f_5c92_4e77_ba3a_0f6b7215f48a.slice/crio-b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949 WatchSource:0}: Error finding container b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949: Status 404 returned error can't find the container with id b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949 Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.678802 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.700194 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.712952 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t4w45"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.724876 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.733515 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.757898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a08e2a1-3eff-4271-bfd3-e0366c8da3e0-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-kp87b\" (UID: \"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.774026 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4j9d\" (UniqueName: \"kubernetes.io/projected/29ce863d-02cf-43c6-a249-bfef15cf04be-kube-api-access-b4j9d\") pod \"etcd-operator-b45778765-zq4gf\" (UID: \"29ce863d-02cf-43c6-a249-bfef15cf04be\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.794151 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfrgj\" (UniqueName: \"kubernetes.io/projected/865ec974-02ed-4218-a599-cf69b6f0a538-kube-api-access-vfrgj\") pod \"cluster-image-registry-operator-dc59b4c8b-5bhlf\" (UID: \"865ec974-02ed-4218-a599-cf69b6f0a538\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.813605 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjgn\" (UniqueName: \"kubernetes.io/projected/a86cb99d-3be8-4acb-98f7-87c5df66c339-kube-api-access-2tjgn\") pod \"machine-config-operator-74547568cd-hch5m\" (UID: \"a86cb99d-3be8-4acb-98f7-87c5df66c339\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.834028 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") pod \"controller-manager-879f6c89f-hpgql\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.854006 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-whqd4"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.857124 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.861120 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqdn6\" (UniqueName: \"kubernetes.io/projected/230baada-7ff6-4b95-b44f-b46e54fe1375-kube-api-access-sqdn6\") pod \"machine-config-controller-84d6567774-f9wvq\" (UID: \"230baada-7ff6-4b95-b44f-b46e54fe1375\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.863270 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.880972 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2xl\" (UniqueName: \"kubernetes.io/projected/6f01f600-cee2-4257-9c5f-a0b7edcd7a9d-kube-api-access-qz2xl\") pod \"router-default-5444994796-sdz4h\" (UID: \"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d\") " pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.892028 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.906847 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.919200 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/863e130d-2f68-47ef-8b6c-2871d38a2282-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mflcb\" (UID: \"863e130d-2f68-47ef-8b6c-2871d38a2282\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.922602 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtlhj\" (UniqueName: \"kubernetes.io/projected/d597b1c7-2562-45a2-b301-14d0db548bc8-kube-api-access-xtlhj\") pod \"kube-storage-version-migrator-operator-b67b599dd-nr86z\" (UID: \"d597b1c7-2562-45a2-b301-14d0db548bc8\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.933841 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.934270 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg2rk\" (UniqueName: \"kubernetes.io/projected/8589782d-8533-4419-b9bf-115446144a39-kube-api-access-gg2rk\") pod \"migrator-59844c95c7-7nw98\" (UID: \"8589782d-8533-4419-b9bf-115446144a39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.937599 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.959608 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcrb9\" (UniqueName: \"kubernetes.io/projected/87528d59-5bdb-4e92-8d6e-062005390f6f-kube-api-access-lcrb9\") pod \"dns-operator-744455d44c-xgqrp\" (UID: \"87528d59-5bdb-4e92-8d6e-062005390f6f\") " pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.965968 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"] Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.976355 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlkfg\" (UniqueName: \"kubernetes.io/projected/92112e1c-6b23-4d10-9f2b-0e33616c96f5-kube-api-access-qlkfg\") pod \"cluster-samples-operator-665b6dd947-g4r2s\" (UID: \"92112e1c-6b23-4d10-9f2b-0e33616c96f5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:28 crc kubenswrapper[4835]: I0201 07:24:28.996711 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdjzt\" (UniqueName: \"kubernetes.io/projected/9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe-kube-api-access-kdjzt\") pod \"openshift-controller-manager-operator-756b6f6bc6-pqcsc\" (UID: \"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.023193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqf4\" (UniqueName: \"kubernetes.io/projected/60b0275a-57b6-482d-b046-ffd270801add-kube-api-access-fhqf4\") pod \"olm-operator-6b444d44fb-p5fjs\" (UID: \"60b0275a-57b6-482d-b046-ffd270801add\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.072268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8q47\" (UniqueName: \"kubernetes.io/projected/79c369eb-e17d-4a32-9167-934aa23fd4fc-kube-api-access-v8q47\") pod \"downloads-7954f5f757-k8v8n\" (UID: \"79c369eb-e17d-4a32-9167-934aa23fd4fc\") " pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.072607 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.072634 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.072653 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073026 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073077 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073095 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b9309ebb-034c-47a1-9328-62fda6feabbd-metrics-tls\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073142 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-config\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073169 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073200 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b4ls\" (UniqueName: \"kubernetes.io/projected/1d5a72cc-b727-4dcf-85cd-d039dc785b65-kube-api-access-7b4ls\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073235 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnj8w\" (UniqueName: \"kubernetes.io/projected/a67dd2fd-8463-4887-94b7-405df03c5c0a-kube-api-access-hnj8w\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073258 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d5a72cc-b727-4dcf-85cd-d039dc785b65-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073277 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073320 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a67dd2fd-8463-4887-94b7-405df03c5c0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073354 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9309ebb-034c-47a1-9328-62fda6feabbd-trusted-ca\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073453 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmj27\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-kube-api-access-lmj27\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073514 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073532 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073553 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.073606 
4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.074182 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.574165396 +0000 UTC m=+142.694601840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.077690 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zq4gf"] Feb 01 07:24:29 crc kubenswrapper[4835]: W0201 07:24:29.094938 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29ce863d_02cf_43c6_a249_bfef15cf04be.slice/crio-ed5e74bf81ffed845454cbb65c9397567d1c1161ae07f413f27a6ca69f988c8c WatchSource:0}: Error finding container ed5e74bf81ffed845454cbb65c9397567d1c1161ae07f413f27a6ca69f988c8c: Status 404 returned error can't find the container with id ed5e74bf81ffed845454cbb65c9397567d1c1161ae07f413f27a6ca69f988c8c Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.111100 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.124535 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.139503 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.146290 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.150114 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175536 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175899 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175940 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175965 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnqf\" (UniqueName: \"kubernetes.io/projected/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-kube-api-access-swnqf\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.175987 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-srv-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176068 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-socket-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176101 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-csi-data-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176176 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-profile-collector-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 
07:24:29.176215 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176236 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176328 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176547 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b9309ebb-034c-47a1-9328-62fda6feabbd-metrics-tls\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176626 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-cabundle\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-config\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176727 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176753 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176826 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9fw\" (UniqueName: \"kubernetes.io/projected/d7c5983d-0780-410d-a88b-06063e0853c1-kube-api-access-7l9fw\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176876 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b4ls\" (UniqueName: \"kubernetes.io/projected/1d5a72cc-b727-4dcf-85cd-d039dc785b65-kube-api-access-7b4ls\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176899 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176947 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnj8w\" (UniqueName: \"kubernetes.io/projected/a67dd2fd-8463-4887-94b7-405df03c5c0a-kube-api-access-hnj8w\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.176996 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d5a72cc-b727-4dcf-85cd-d039dc785b65-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.177066 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.677046008 +0000 UTC m=+142.797482442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177122 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a67dd2fd-8463-4887-94b7-405df03c5c0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177148 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177183 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9309ebb-034c-47a1-9328-62fda6feabbd-trusted-ca\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177205 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghbdt\" (UniqueName: \"kubernetes.io/projected/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-kube-api-access-ghbdt\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177257 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47vr\" (UniqueName: \"kubernetes.io/projected/ac6d201a-b05d-47ab-b71f-0859b88f0024-kube-api-access-m47vr\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177433 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmj27\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-kube-api-access-lmj27\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc 
kubenswrapper[4835]: I0201 07:24:29.177489 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177513 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177532 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-apiservice-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177618 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-mountpoint-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177642 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9db65efb-d721-45dc-87a6-6ef40be6789d-metrics-tls\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177714 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177740 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd7rd\" (UniqueName: \"kubernetes.io/projected/d18912d2-49bb-4779-9b02-fc9707e55b38-kube-api-access-bd7rd\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177762 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjnzv\" (UniqueName: \"kubernetes.io/projected/1adf70cf-02dc-4c30-9c35-6507314a4fa8-kube-api-access-kjnzv\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177791 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9bww\" (UniqueName: 
\"kubernetes.io/projected/9db65efb-d721-45dc-87a6-6ef40be6789d-kube-api-access-v9bww\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177812 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxn9g\" (UniqueName: \"kubernetes.io/projected/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-kube-api-access-jxn9g\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177862 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18912d2-49bb-4779-9b02-fc9707e55b38-cert\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177880 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-webhook-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177900 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-certs\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177935 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177969 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-serving-cert\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.177990 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178010 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: 
\"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178026 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-node-bootstrap-token\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178073 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9db65efb-d721-45dc-87a6-6ef40be6789d-config-volume\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178092 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-registration-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178138 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1adf70cf-02dc-4c30-9c35-6507314a4fa8-tmpfs\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178154 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-plugins-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178179 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8q47\" (UniqueName: \"kubernetes.io/projected/79c369eb-e17d-4a32-9167-934aa23fd4fc-kube-api-access-v8q47\") pod \"downloads-7954f5f757-k8v8n\" (UID: \"79c369eb-e17d-4a32-9167-934aa23fd4fc\") " pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178214 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-config\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178246 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-key\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178332 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.178374 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5rdt\" (UniqueName: \"kubernetes.io/projected/2708d65e-6013-4f55-9492-3a3ec5529d9b-kube-api-access-c5rdt\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.185010 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.191399 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-config\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.191934 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf"] Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.195432 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.695391692 +0000 UTC m=+142.815828126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.196579 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.197222 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b9309ebb-034c-47a1-9328-62fda6feabbd-trusted-ca\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.213636 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.232975 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d5a72cc-b727-4dcf-85cd-d039dc785b65-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: \"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.233300 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b9309ebb-034c-47a1-9328-62fda6feabbd-metrics-tls\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.235575 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.238614 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.245279 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 
07:24:29.246356 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.247342 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.247701 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-01 07:19:28 +0000 UTC, rotation deadline is 2026-10-28 22:07:43.320729815 +0000 UTC Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.247740 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6470h43m14.072992533s for next certificate rotation Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.257032 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.257299 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.257948 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.263648 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a67dd2fd-8463-4887-94b7-405df03c5c0a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.265970 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.281853 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282063 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxn9g\" (UniqueName: \"kubernetes.io/projected/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-kube-api-access-jxn9g\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18912d2-49bb-4779-9b02-fc9707e55b38-cert\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282117 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-webhook-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282141 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-certs\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282162 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282182 4835 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-serving-cert\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282209 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282230 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-node-bootstrap-token\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282248 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9db65efb-d721-45dc-87a6-6ef40be6789d-config-volume\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282275 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-registration-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1adf70cf-02dc-4c30-9c35-6507314a4fa8-tmpfs\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282313 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-plugins-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282343 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-config\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282363 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-key\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282395 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5rdt\" (UniqueName: \"kubernetes.io/projected/2708d65e-6013-4f55-9492-3a3ec5529d9b-kube-api-access-c5rdt\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282464 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swnqf\" (UniqueName: \"kubernetes.io/projected/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-kube-api-access-swnqf\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282497 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-srv-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282523 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-socket-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-csi-data-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282566 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-profile-collector-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282589 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282622 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282646 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-cabundle\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282710 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l9fw\" (UniqueName: \"kubernetes.io/projected/d7c5983d-0780-410d-a88b-06063e0853c1-kube-api-access-7l9fw\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282736 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282770 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282802 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47vr\" (UniqueName: \"kubernetes.io/projected/ac6d201a-b05d-47ab-b71f-0859b88f0024-kube-api-access-m47vr\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.282822 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghbdt\" (UniqueName: \"kubernetes.io/projected/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-kube-api-access-ghbdt\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.282973 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.782822637 +0000 UTC m=+142.903259071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283051 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283076 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-apiservice-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283101 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd7rd\" (UniqueName: \"kubernetes.io/projected/d18912d2-49bb-4779-9b02-fc9707e55b38-kube-api-access-bd7rd\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283118 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjnzv\" (UniqueName: \"kubernetes.io/projected/1adf70cf-02dc-4c30-9c35-6507314a4fa8-kube-api-access-kjnzv\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-mountpoint-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283167 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9db65efb-d721-45dc-87a6-6ef40be6789d-metrics-tls\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283201 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9bww\" (UniqueName: \"kubernetes.io/projected/9db65efb-d721-45dc-87a6-6ef40be6789d-kube-api-access-v9bww\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.283423 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-socket-dir\") pod 
\"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.291303 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.79128897 +0000 UTC m=+142.911725404 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.292429 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmj27\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-kube-api-access-lmj27\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.294345 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9db65efb-d721-45dc-87a6-6ef40be6789d-config-volume\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.294497 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-registration-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.294879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1adf70cf-02dc-4c30-9c35-6507314a4fa8-tmpfs\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.294933 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-plugins-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.296235 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-cabundle\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.297442 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8q47\" (UniqueName: \"kubernetes.io/projected/79c369eb-e17d-4a32-9167-934aa23fd4fc-kube-api-access-v8q47\") pod 
\"downloads-7954f5f757-k8v8n\" (UID: \"79c369eb-e17d-4a32-9167-934aa23fd4fc\") " pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.297598 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d18912d2-49bb-4779-9b02-fc9707e55b38-cert\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.298433 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-certs\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.298604 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-csi-data-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.299554 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-profile-collector-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.299602 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ac6d201a-b05d-47ab-b71f-0859b88f0024-mountpoint-dir\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.300167 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-srv-cert\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.300403 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-apiservice-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.300870 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1adf70cf-02dc-4c30-9c35-6507314a4fa8-webhook-cert\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.301248 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") pod 
\"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.301950 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.302289 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d7c5983d-0780-410d-a88b-06063e0853c1-node-bootstrap-token\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.302810 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-config\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.303684 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2708d65e-6013-4f55-9492-3a3ec5529d9b-signing-key\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.306868 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-serving-cert\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.307391 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.307790 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9db65efb-d721-45dc-87a6-6ef40be6789d-metrics-tls\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.308978 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.313245 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.335927 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.337864 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnj8w\" (UniqueName: \"kubernetes.io/projected/a67dd2fd-8463-4887-94b7-405df03c5c0a-kube-api-access-hnj8w\") pod \"control-plane-machine-set-operator-78cbb6b69f-ngjw6\" (UID: \"a67dd2fd-8463-4887-94b7-405df03c5c0a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.338200 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b9309ebb-034c-47a1-9328-62fda6feabbd-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dk9xj\" (UID: \"b9309ebb-034c-47a1-9328-62fda6feabbd\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.353739 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-84rg2\" (UID: \"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.354378 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" event={"ID":"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1","Type":"ContainerStarted","Data":"0c8d8bf889d5b4e67fae72cc4e06aef9d04f3f8b5dd91f77a362cddcf40445dd"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.354441 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" event={"ID":"19e98f8d-2de0-4a3b-b9b5-a18f2c65a0d1","Type":"ContainerStarted","Data":"0cc056cbdcfb51ec2f5356b71f8fee4b3804bf88cc6198d36f0566ef3eba9819"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.381846 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sdz4h" event={"ID":"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d","Type":"ContainerStarted","Data":"cfdbb9382b4a307422d07dd9da4e5828e9c2347ea85b34cc139a1fdbb4a035cb"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.381905 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sdz4h" event={"ID":"6f01f600-cee2-4257-9c5f-a0b7edcd7a9d","Type":"ContainerStarted","Data":"231178deecefe36414e937a38e60842b4f77ff81b48615ff70990bc2a4afcd57"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.383286 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b4ls\" (UniqueName: \"kubernetes.io/projected/1d5a72cc-b727-4dcf-85cd-d039dc785b65-kube-api-access-7b4ls\") pod \"multus-admission-controller-857f4d67dd-fbdw8\" (UID: 
\"1d5a72cc-b727-4dcf-85cd-d039dc785b65\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.386019 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.386322 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.386527 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.886403198 +0000 UTC m=+143.006839632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.386788 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" event={"ID":"62724c3f-5c92-4e77-ba3a-0f6b7215f48a","Type":"ContainerStarted","Data":"3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.386840 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" event={"ID":"62724c3f-5c92-4e77-ba3a-0f6b7215f48a","Type":"ContainerStarted","Data":"b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.387253 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.388169 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-t4w45" event={"ID":"fb0c8a64-40d8-4fff-8ca4-b573df90cd88","Type":"ContainerStarted","Data":"3bf00974b0d34ae35d2bfd61912fedaaf2ebd32b9923f911d9959c0ad49e8b0e"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.388220 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-t4w45" event={"ID":"fb0c8a64-40d8-4fff-8ca4-b573df90cd88","Type":"ContainerStarted","Data":"d51b4fc642f7c878f9877442c494fad180b69c834a01b2bad4b512a8a9ef9017"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.388943 4835 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tkff4 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.388968 4835 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.389769 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.395809 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.396070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" event={"ID":"29ce863d-02cf-43c6-a249-bfef15cf04be","Type":"ContainerStarted","Data":"ed5e74bf81ffed845454cbb65c9397567d1c1161ae07f413f27a6ca69f988c8c"} Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.396099 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:29.896085833 +0000 UTC m=+143.016522267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398210 4835 patch_prober.go:28] interesting pod/console-operator-58897d9998-t4w45 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398241 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-t4w45" podUID="fb0c8a64-40d8-4fff-8ca4-b573df90cd88" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398901 4835 generic.go:334] "Generic (PLEG): container finished" podID="90833a57-ccdb-452f-b86a-7741f52c5a80" containerID="8625565a7389eb8ce101d247c43d8245dc3db5255fb26f6c90bb912fde432587" exitCode=0 Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398972 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" event={"ID":"90833a57-ccdb-452f-b86a-7741f52c5a80","Type":"ContainerDied","Data":"8625565a7389eb8ce101d247c43d8245dc3db5255fb26f6c90bb912fde432587"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.398991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" event={"ID":"90833a57-ccdb-452f-b86a-7741f52c5a80","Type":"ContainerStarted","Data":"86004b2c0e2950bc7fcf4234811311c6c60c4eb8bbeb3a5fbbf9c12d3ebba80e"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.427097 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" event={"ID":"8924e4db-3c47-4e66-90d1-e74e49f3a65d","Type":"ContainerStarted","Data":"6826a1aa80ff4a7e5da8fd738d69d41ba45e2b1f073216a466b2446b1d67804b"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.427127 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" event={"ID":"8924e4db-3c47-4e66-90d1-e74e49f3a65d","Type":"ContainerStarted","Data":"1d62c0b30da0cbadfd81b94c8bdf7068b408ef05a4aad70f3bdb381e971ba966"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.433404 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.435593 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") pod \"marketplace-operator-79b997595-mjg6g\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.436576 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq"] Feb 01 07:24:29 crc kubenswrapper[4835]: W0201 07:24:29.442029 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f19c84_0217_4b08_8b4d_663096ce67b4.slice/crio-88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308 WatchSource:0}: Error finding container 88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308: Status 404 returned error can't find the container with id 88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308 Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.447374 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghbdt\" (UniqueName: \"kubernetes.io/projected/6fa37cd2-a8e5-4624-91e2-6d249bdb7c87-kube-api-access-ghbdt\") pod \"catalog-operator-68c6474976-7ngw7\" (UID: \"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.448241 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8hgqx" event={"ID":"9154a093-1841-44f5-a71d-e42f5c19dfba","Type":"ContainerStarted","Data":"a348337aab744f36739678abd65aa608388ee645e9993d277c3a572b6423e421"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.448288 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8hgqx" event={"ID":"9154a093-1841-44f5-a71d-e42f5c19dfba","Type":"ContainerStarted","Data":"3f57da290e1a59ebf25ad55f2d58c4b9d8676678ad28d426555b782b4447196b"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.449945 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" 
event={"ID":"03f29b26-d2bd-48e2-9804-c90a5315658c","Type":"ContainerStarted","Data":"9750538fca96b0766c066bfb611cde62365bf6afe42ad480a5b8b02e34a2a487"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.457604 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" event={"ID":"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0","Type":"ContainerStarted","Data":"04064e9f1a6039a4ab3beed3cb2a5adc02aef26bc6065116b8fa4bbae7f5f049"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.463468 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" event={"ID":"cad3b595-c72f-49b8-92e0-932f9f591375","Type":"ContainerStarted","Data":"fc58fb551f0a225b076d6aed0819c29feae8a582e1526867b4698f5211360397"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.463499 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" event={"ID":"cad3b595-c72f-49b8-92e0-932f9f591375","Type":"ContainerStarted","Data":"2271121ff70fd4f78d2c75e5e8c785b61155e20a1253d6b08d8d07df78c9f569"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.473245 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9bww\" (UniqueName: \"kubernetes.io/projected/9db65efb-d721-45dc-87a6-6ef40be6789d-kube-api-access-v9bww\") pod \"dns-default-gmr7g\" (UID: \"9db65efb-d721-45dc-87a6-6ef40be6789d\") " pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.475863 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.481656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" event={"ID":"865ec974-02ed-4218-a599-cf69b6f0a538","Type":"ContainerStarted","Data":"4e9b31596b21d04c3a40ccdc783f37b02e14876d7c408c95a469101f164236bf"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.483484 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.484899 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxn9g\" (UniqueName: \"kubernetes.io/projected/889e5fa5-6b80-4bc3-b19b-0d3621f7fceb-kube-api-access-jxn9g\") pod \"service-ca-operator-777779d784-2cpj2\" (UID: \"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.500059 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.500226 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") pod \"collect-profiles-29498835-zbz9x\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.500844 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.000829234 +0000 UTC m=+143.121265668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.525229 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" event={"ID":"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c","Type":"ContainerStarted","Data":"87b9c9b22d193dcf8d26bb1e24cb0941aa1472eca81e4cb52d77be7e83a463bf"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.525266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" event={"ID":"bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c","Type":"ContainerStarted","Data":"ef5535503991c96116fd319cea061c35750484864c7e9af1184dda44676f65ff"} Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.528288 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5rdt\" (UniqueName: \"kubernetes.io/projected/2708d65e-6013-4f55-9492-3a3ec5529d9b-kube-api-access-c5rdt\") pod \"service-ca-9c57cc56f-4qc29\" (UID: \"2708d65e-6013-4f55-9492-3a3ec5529d9b\") " pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.529536 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.544783 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.546096 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swnqf\" (UniqueName: \"kubernetes.io/projected/8fa1edf3-e0a6-4d1a-aa61-172397ca736b-kube-api-access-swnqf\") pod \"package-server-manager-789f6589d5-9t7c7\" (UID: \"8fa1edf3-e0a6-4d1a-aa61-172397ca736b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.550438 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.568338 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.571930 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.576351 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjnzv\" (UniqueName: \"kubernetes.io/projected/1adf70cf-02dc-4c30-9c35-6507314a4fa8-kube-api-access-kjnzv\") pod \"packageserver-d55dfcdfc-q45cc\" (UID: \"1adf70cf-02dc-4c30-9c35-6507314a4fa8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.588605 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.591429 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:29 crc kubenswrapper[4835]: W0201 07:24:29.594535 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod230baada_7ff6_4b95_b44f_b46e54fe1375.slice/crio-daaf1da826328e0631ea38f51d220f0cf04fade3ac5661497df679efe4098dea WatchSource:0}: Error finding container daaf1da826328e0631ea38f51d220f0cf04fade3ac5661497df679efe4098dea: Status 404 returned error can't find the container with id daaf1da826328e0631ea38f51d220f0cf04fade3ac5661497df679efe4098dea Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.594679 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.600362 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd7rd\" (UniqueName: \"kubernetes.io/projected/d18912d2-49bb-4779-9b02-fc9707e55b38-kube-api-access-bd7rd\") pod \"ingress-canary-shvm4\" (UID: \"d18912d2-49bb-4779-9b02-fc9707e55b38\") " pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.604203 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.604591 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l9fw\" (UniqueName: \"kubernetes.io/projected/d7c5983d-0780-410d-a88b-06063e0853c1-kube-api-access-7l9fw\") pod \"machine-config-server-2vc59\" (UID: \"d7c5983d-0780-410d-a88b-06063e0853c1\") " pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.606231 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.106219163 +0000 UTC m=+143.226655597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.607345 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.608037 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.625541 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.646213 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2vc59" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.649195 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-shvm4" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.669442 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47vr\" (UniqueName: \"kubernetes.io/projected/ac6d201a-b05d-47ab-b71f-0859b88f0024-kube-api-access-m47vr\") pod \"csi-hostpathplugin-xwsnp\" (UID: \"ac6d201a-b05d-47ab-b71f-0859b88f0024\") " pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.687637 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.716988 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.717352 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.217337592 +0000 UTC m=+143.337774026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.719704 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-xgqrp"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.752640 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs"] Feb 01 07:24:29 crc kubenswrapper[4835]: W0201 07:24:29.815013 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87528d59_5bdb_4e92_8d6e_062005390f6f.slice/crio-e8b33f44a5103bf68688ec78819c55f87f513c0485bf13af4281fcbc8e5592cb WatchSource:0}: Error finding container e8b33f44a5103bf68688ec78819c55f87f513c0485bf13af4281fcbc8e5592cb: Status 404 returned error can't find the container with id e8b33f44a5103bf68688ec78819c55f87f513c0485bf13af4281fcbc8e5592cb Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.823162 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.826257 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-01 07:24:30.326240983 +0000 UTC m=+143.446677427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.846795 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.887985 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc"] Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.892979 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.913099 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:29 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:29 crc kubenswrapper[4835]: [+]process-running ok Feb 01 07:24:29 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.913370 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.924044 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.924325 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.424301718 +0000 UTC m=+143.544738152 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.924396 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:29 crc kubenswrapper[4835]: E0201 07:24:29.924922 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.424916065 +0000 UTC m=+143.545352499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:29 crc kubenswrapper[4835]: I0201 07:24:29.932481 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.025144 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.025697 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.525682521 +0000 UTC m=+143.646118955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.039113 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.107285 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.126969 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.127263 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.627253069 +0000 UTC m=+143.747689503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.230894 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.231651 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.731636331 +0000 UTC m=+143.852072765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.333842 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.334419 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.83439329 +0000 UTC m=+143.954829724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.434761 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.435359 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:30.935344571 +0000 UTC m=+144.055781005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.526476 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.540540 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.540887 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.040874513 +0000 UTC m=+144.161310947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.576942 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-k8v8n"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.592681 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" event={"ID":"865ec974-02ed-4218-a599-cf69b6f0a538","Type":"ContainerStarted","Data":"a0fab6d9e455159d489fdf15be4c4ddcbde57a5a92798a83d9a6e85cb794401a"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.593790 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" event={"ID":"d597b1c7-2562-45a2-b301-14d0db548bc8","Type":"ContainerStarted","Data":"f82723bbd3eeabc33c2405380596f767a2cbfda7b3b21dc793212ff339c7a64c"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.594355 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" event={"ID":"92112e1c-6b23-4d10-9f2b-0e33616c96f5","Type":"ContainerStarted","Data":"84904def63e019017a6ac04b6a4a875d059601712a141e29217d78b2543f4131"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.598807 4835 generic.go:334] "Generic (PLEG): container finished" podID="03f29b26-d2bd-48e2-9804-c90a5315658c" containerID="ffb133c9b412f2d348c9b6505beae9d6667bbe2a7616c009fef89ad96ac058eb" exitCode=0 Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.598847 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" event={"ID":"03f29b26-d2bd-48e2-9804-c90a5315658c","Type":"ContainerDied","Data":"ffb133c9b412f2d348c9b6505beae9d6667bbe2a7616c009fef89ad96ac058eb"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.607899 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" event={"ID":"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe","Type":"ContainerStarted","Data":"a50783c21ddc727571ad09de7a7248b3fba9c084e6adc2f62ee12b791522c8b8"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.609595 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.612009 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" podStartSLOduration=117.611994859 podStartE2EDuration="1m57.611994859s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:30.608072185 +0000 UTC m=+143.728508619" watchObservedRunningTime="2026-02-01 07:24:30.611994859 +0000 UTC m=+143.732431293" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.617238 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" event={"ID":"8589782d-8533-4419-b9bf-115446144a39","Type":"ContainerStarted","Data":"d68ea96081a91cbc2c68481f8ec66bcb26682a6c3e8e11909233b29a55eeb908"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.631605 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" event={"ID":"79f19c84-0217-4b08-8b4d-663096ce67b4","Type":"ContainerStarted","Data":"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.631694 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" event={"ID":"79f19c84-0217-4b08-8b4d-663096ce67b4","Type":"ContainerStarted","Data":"88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.631838 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.643918 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.645215 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.145197994 +0000 UTC m=+144.265634428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.648389 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" event={"ID":"8924e4db-3c47-4e66-90d1-e74e49f3a65d","Type":"ContainerStarted","Data":"97dbbec403cdf097004c054b134743aca0a923e14c41eacb5b0f64ceb3368b74"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.651470 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" event={"ID":"60b0275a-57b6-482d-b046-ffd270801add","Type":"ContainerStarted","Data":"44fdeff9b4db5725de72107a92d6616daa4dfd03e29b3f455ffcdf49c0c3d090"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.652440 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" event={"ID":"a86cb99d-3be8-4acb-98f7-87c5df66c339","Type":"ContainerStarted","Data":"836a4e99a30dc06078255d861eba18ce6360993f061c0f528c15d4ba51ec34c8"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.652463 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" event={"ID":"a86cb99d-3be8-4acb-98f7-87c5df66c339","Type":"ContainerStarted","Data":"579fd3a8b9da3927d34eedf1e5b918879be38fa019c6f33ffd06c053ee0996cf"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.653529 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" event={"ID":"1adf70cf-02dc-4c30-9c35-6507314a4fa8","Type":"ContainerStarted","Data":"9f67f0cf8faaaf9cb7d8e5b78ebab084593afb4b59d7daa5df0a7a15802ec1f9"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.664025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" event={"ID":"5a08e2a1-3eff-4271-bfd3-e0366c8da3e0","Type":"ContainerStarted","Data":"80b5c7e9e0d040ec563b62966f2af8084efe396a0d8a11d4f5ec0724b439cf3e"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.664857 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2vc59" event={"ID":"d7c5983d-0780-410d-a88b-06063e0853c1","Type":"ContainerStarted","Data":"91bc94d571f384c937a2764c3bf071a836490dd42722ffdecdec5838001dc378"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.671466 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" event={"ID":"29ce863d-02cf-43c6-a249-bfef15cf04be","Type":"ContainerStarted","Data":"fba4b5a44391325820f99ce8185b2d3a3d092896e46650dbf0eb4db7c4061b19"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.681847 4835 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hpgql container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" 
start-of-body= Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.681901 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.716432 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" event={"ID":"90833a57-ccdb-452f-b86a-7741f52c5a80","Type":"ContainerStarted","Data":"0f1d8581f1c88d783d60eebe5654895b6af72c09b306a277f76195a06116b890"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.720103 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.738633 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" event={"ID":"863e130d-2f68-47ef-8b6c-2871d38a2282","Type":"ContainerStarted","Data":"05ed26f845aaac5c630e08f419c563f115897f945e20c4def0d966f253b5549c"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.745297 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.751516 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.251500176 +0000 UTC m=+144.371936610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.760591 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" event={"ID":"87528d59-5bdb-4e92-8d6e-062005390f6f","Type":"ContainerStarted","Data":"e8b33f44a5103bf68688ec78819c55f87f513c0485bf13af4281fcbc8e5592cb"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.808946 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8hgqx" podStartSLOduration=118.80892847 podStartE2EDuration="1m58.80892847s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:30.806800714 +0000 UTC m=+143.927237148" watchObservedRunningTime="2026-02-01 07:24:30.80892847 +0000 UTC m=+143.929364904" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.809151 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" event={"ID":"230baada-7ff6-4b95-b44f-b46e54fe1375","Type":"ContainerStarted","Data":"daaf1da826328e0631ea38f51d220f0cf04fade3ac5661497df679efe4098dea"} Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.846921 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.850124 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.350087525 +0000 UTC m=+144.470523959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.886334 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" podStartSLOduration=118.886317451 podStartE2EDuration="1m58.886317451s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:30.884859762 +0000 UTC m=+144.005296196" watchObservedRunningTime="2026-02-01 07:24:30.886317451 +0000 UTC m=+144.006753875" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.897530 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gmr7g"] Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.949094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:30 crc kubenswrapper[4835]: E0201 07:24:30.952245 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.452224378 +0000 UTC m=+144.572660812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.974628 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-t4w45" Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.982151 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:30 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:30 crc kubenswrapper[4835]: [+]process-running ok Feb 01 07:24:30 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:30 crc kubenswrapper[4835]: I0201 07:24:30.982360 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.049783 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.050126 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.550112719 +0000 UTC m=+144.670549153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.138663 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" podStartSLOduration=119.138639962 podStartE2EDuration="1m59.138639962s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.096613074 +0000 UTC m=+144.217049508" watchObservedRunningTime="2026-02-01 07:24:31.138639962 +0000 UTC m=+144.259076396" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.138964 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-whqd4" podStartSLOduration=119.13895859 podStartE2EDuration="1m59.13895859s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.130504367 +0000 UTC m=+144.250940791" watchObservedRunningTime="2026-02-01 07:24:31.13895859 +0000 UTC m=+144.259395024" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.154473 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.154767 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.654755047 +0000 UTC m=+144.775191481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.155290 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-x4ddr" podStartSLOduration=119.1552699 podStartE2EDuration="1m59.1552699s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.154801258 +0000 UTC m=+144.275237702" watchObservedRunningTime="2026-02-01 07:24:31.1552699 +0000 UTC m=+144.275706334" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.174427 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.256908 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.257765 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.757750292 +0000 UTC m=+144.878186716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.315496 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xwsnp"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.354254 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.355422 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-t4w45" podStartSLOduration=119.355391786 podStartE2EDuration="1m59.355391786s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.354689578 +0000 UTC m=+144.475126042" watchObservedRunningTime="2026-02-01 07:24:31.355391786 +0000 UTC m=+144.475828220" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.362972 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.363301 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.863283444 +0000 UTC m=+144.983719878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.385447 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-shvm4"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.386119 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7"] Feb 01 07:24:31 crc kubenswrapper[4835]: W0201 07:24:31.405116 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod137b200e_5dcd_43c9_82e2_332071d84cb0.slice/crio-42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5 WatchSource:0}: Error finding container 42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5: Status 404 returned error can't find the container with id 42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5 Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.415811 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4qc29"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.453487 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.457739 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.487667 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fbdw8"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.495586 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.496303 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:31.99628132 +0000 UTC m=+145.116717744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.499788 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.501037 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-sdz4h" podStartSLOduration=119.501028165 podStartE2EDuration="1m59.501028165s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.493431775 +0000 UTC m=+144.613868229" watchObservedRunningTime="2026-02-01 07:24:31.501028165 +0000 UTC m=+144.621464599" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.551895 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-547k6" podStartSLOduration=119.551882196 podStartE2EDuration="1m59.551882196s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.549546035 +0000 UTC m=+144.669982479" watchObservedRunningTime="2026-02-01 07:24:31.551882196 +0000 UTC m=+144.672318630" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.588609 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6"] Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.596891 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.597192 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.09718167 +0000 UTC m=+145.217618104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.598146 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dj84j" podStartSLOduration=119.598130385 podStartE2EDuration="1m59.598130385s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.589483787 +0000 UTC m=+144.709920221" watchObservedRunningTime="2026-02-01 07:24:31.598130385 +0000 UTC m=+144.718566819" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.688028 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m" podStartSLOduration=119.687992705 podStartE2EDuration="1m59.687992705s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.682054858 +0000 UTC m=+144.802491302" watchObservedRunningTime="2026-02-01 07:24:31.687992705 +0000 UTC m=+144.808429139" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.702249 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.702530 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.202515637 +0000 UTC m=+145.322952071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.792766 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" podStartSLOduration=119.792726366 podStartE2EDuration="1m59.792726366s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.764400939 +0000 UTC m=+144.884837383" watchObservedRunningTime="2026-02-01 07:24:31.792726366 +0000 UTC m=+144.913162800" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.811504 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.811835 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.311820459 +0000 UTC m=+145.432256893 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.824441 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" event={"ID":"2708d65e-6013-4f55-9492-3a3ec5529d9b","Type":"ContainerStarted","Data":"68752236067db4daa25ed6dad49be45b5927ec7f0c7dabea55b260a167d003e7"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.828472 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-kp87b" podStartSLOduration=119.828461708 podStartE2EDuration="1m59.828461708s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.827193114 +0000 UTC m=+144.947629548" watchObservedRunningTime="2026-02-01 07:24:31.828461708 +0000 UTC m=+144.948898142" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.829575 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zq4gf" podStartSLOduration=119.829567957 podStartE2EDuration="1m59.829567957s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.793611329 +0000 UTC m=+144.914047763" watchObservedRunningTime="2026-02-01 07:24:31.829567957 +0000 UTC m=+144.950004381" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.836799 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" event={"ID":"b9309ebb-034c-47a1-9328-62fda6feabbd","Type":"ContainerStarted","Data":"34c45a02e198c151baf79c6bd9ea077b132518649f498984b1e6edc4d52e38af"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.836843 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" event={"ID":"b9309ebb-034c-47a1-9328-62fda6feabbd","Type":"ContainerStarted","Data":"902927a557ceae22017b09801beb858c2156478a96d2475071eea4cdead37291"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.849461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" event={"ID":"9fdcaba3-a4b8-4c71-8ed1-ee03534d1ebe","Type":"ContainerStarted","Data":"45bfc4d84532a62ff8085f9c09c0be824a7a4582ab463f16cdf3f5794b587e23"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.859931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" event={"ID":"8589782d-8533-4419-b9bf-115446144a39","Type":"ContainerStarted","Data":"8ff3274645aac068a73a6dd08b164a26ce7d623f6f8b3154e10e7315f5707261"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.859972 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" event={"ID":"8589782d-8533-4419-b9bf-115446144a39","Type":"ContainerStarted","Data":"34bf34fa75c6a937a871093522648f070a9aaff3a3f14d50555984d54d2dc781"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.862036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" event={"ID":"a86cb99d-3be8-4acb-98f7-87c5df66c339","Type":"ContainerStarted","Data":"52d86496d8b51eeb24c85722fdab7a4b2e02fa19d13f64b3364dc684926182c1"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.865146 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" event={"ID":"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2","Type":"ContainerStarted","Data":"d0f923b266c0e02367584fffa4072913a3a0674a08e4c883dd5e6d0420893cf9"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.866570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" event={"ID":"137b200e-5dcd-43c9-82e2-332071d84cb0","Type":"ContainerStarted","Data":"42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.870788 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5bhlf" podStartSLOduration=119.870772373 podStartE2EDuration="1m59.870772373s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.867310662 +0000 UTC m=+144.987747096" watchObservedRunningTime="2026-02-01 07:24:31.870772373 +0000 UTC m=+144.991208817" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.874076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" event={"ID":"1adf70cf-02dc-4c30-9c35-6507314a4fa8","Type":"ContainerStarted","Data":"9497174ae637109963bc6730afc85375c0d536a7ac88093bf3498002c11eb52f"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.875264 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.879258 4835 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q45cc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.879294 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" podUID="1adf70cf-02dc-4c30-9c35-6507314a4fa8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.884904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k8v8n" event={"ID":"79c369eb-e17d-4a32-9167-934aa23fd4fc","Type":"ContainerStarted","Data":"a9b95c2516ea1eabe650c1202217a4a89526c836103934183929279617805fc6"} Feb 01 07:24:31 crc 
kubenswrapper[4835]: I0201 07:24:31.884939 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-k8v8n" event={"ID":"79c369eb-e17d-4a32-9167-934aa23fd4fc","Type":"ContainerStarted","Data":"bb716b6d8958f78a4275e57edcbac3cb15c220499f25088429bb8a8d6d5387bc"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.885440 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.887824 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" event={"ID":"92112e1c-6b23-4d10-9f2b-0e33616c96f5","Type":"ContainerStarted","Data":"fb0b1d1b79fd3893cb8f2c62f378e09b996faf79f605171f4ad50b84dbf9d01f"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.888235 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8v8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.888269 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8v8n" podUID="79c369eb-e17d-4a32-9167-934aa23fd4fc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.890354 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" event={"ID":"03f29b26-d2bd-48e2-9804-c90a5315658c","Type":"ContainerStarted","Data":"500efa79ef1199130e63fdd3b869fb6281b92c3f15f0f107143951cc15ae6a54"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.893643 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" event={"ID":"863e130d-2f68-47ef-8b6c-2871d38a2282","Type":"ContainerStarted","Data":"18d37d8e45f49ce4cf4dd8ee9eba9e125d812be79e3bfd7cb1db8ec39b52f7fb"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.894742 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 01 07:24:31 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Feb 01 07:24:31 crc kubenswrapper[4835]: [+]process-running ok Feb 01 07:24:31 crc kubenswrapper[4835]: healthz check failed Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.894780 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.895945 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" event={"ID":"8fa1edf3-e0a6-4d1a-aa61-172397ca736b","Type":"ContainerStarted","Data":"21f9211dadfb31994ca7f72cf7d9d116a26133f97e51e7c5f56079086f180d06"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.897308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-2vc59" event={"ID":"d7c5983d-0780-410d-a88b-06063e0853c1","Type":"ContainerStarted","Data":"8d7e23c3615eeb7d20bbb7bd4bc75d40abef226640be2b1ea7f935bf7023ec6d"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.898529 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" event={"ID":"8615180e-fc31-41b2-ad59-5ae2e48af5a2","Type":"ContainerStarted","Data":"756ac183cdf318bae9818cbd3f3e4f67346c6974661fa7194394a92f9755088e"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.900056 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" event={"ID":"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb","Type":"ContainerStarted","Data":"7e3f187c0d5740afc8abd1fba600a581ae1f40b6007c292cf372af7333a6e571"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.900079 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" event={"ID":"889e5fa5-6b80-4bc3-b19b-0d3621f7fceb","Type":"ContainerStarted","Data":"0baa23da9c1e34ce11481805880843d424a6c693b3b430d5d88f755bed846eac"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.901173 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" event={"ID":"60b0275a-57b6-482d-b046-ffd270801add","Type":"ContainerStarted","Data":"b680a823fc58f2fc572df89b86270e1e92f77caca75736248ef8021a98647306"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.901331 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.902573 4835 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-p5fjs container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.902607 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" podUID="60b0275a-57b6-482d-b046-ffd270801add" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.904620 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gmr7g" event={"ID":"9db65efb-d721-45dc-87a6-6ef40be6789d","Type":"ContainerStarted","Data":"a78ec2f4a3d7af98fa72594c73af17af831ae788fbcaca6dc1d60924281c8a26"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.904671 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gmr7g" event={"ID":"9db65efb-d721-45dc-87a6-6ef40be6789d","Type":"ContainerStarted","Data":"45c6d1f550f80c91e3f97631abbe801f186c623ed96faf0ad8e749a3db4d4059"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.906574 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" event={"ID":"1d5a72cc-b727-4dcf-85cd-d039dc785b65","Type":"ContainerStarted","Data":"7bf69ff9b086d3ab804b8474505b9e9ec776906b696cbf8354574eacdea008b9"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 
07:24:31.910186 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" event={"ID":"87528d59-5bdb-4e92-8d6e-062005390f6f","Type":"ContainerStarted","Data":"97a783040f0e0e229bc6e7b0fbac7de4489d3ced77713afe5be1277fc9812001"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.912313 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.912428 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.4123919 +0000 UTC m=+145.532828334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.913320 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" event={"ID":"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87","Type":"ContainerStarted","Data":"62a0ffdd73fe6b4dcb10967bb2153470901670653ea73aea1bdc348653b1df73"} Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.913541 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:31 crc kubenswrapper[4835]: E0201 07:24:31.913926 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.413913801 +0000 UTC m=+145.534350235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.933767 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" event={"ID":"230baada-7ff6-4b95-b44f-b46e54fe1375","Type":"ContainerStarted","Data":"20df8a69a62cd6c0bf4f5b7e6a30fa0331596600892443bcd5207a2cda8ec740"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.933808 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" event={"ID":"230baada-7ff6-4b95-b44f-b46e54fe1375","Type":"ContainerStarted","Data":"b5fa1f0c353b2e821d299637df5ca8511d07ab552242aac524d7494f3b468896"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.935680 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" event={"ID":"d597b1c7-2562-45a2-b301-14d0db548bc8","Type":"ContainerStarted","Data":"44361d82b8573e1975cd63e85890433ba380a08b2517bbecdcd75ca66f8b32ac"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.937699 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" event={"ID":"a67dd2fd-8463-4887-94b7-405df03c5c0a","Type":"ContainerStarted","Data":"eac7af213b671e8be276b4fee8f443830786071d41b22ae8b90397f3b0465f31"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.938696 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-shvm4" event={"ID":"d18912d2-49bb-4779-9b02-fc9707e55b38","Type":"ContainerStarted","Data":"7d325207245525c92020b9e53fd076f104c78ed508360aff40003de0377e4310"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.948603 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"29c32d2386fad600c22aaa50e690ade449039c568fa6280035cdf3cd047811e8"}
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.953725 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql"
Feb 01 07:24:31 crc kubenswrapper[4835]: I0201 07:24:31.990792 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" podStartSLOduration=118.990765417 podStartE2EDuration="1m58.990765417s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:31.98939063 +0000 UTC m=+145.109827064" watchObservedRunningTime="2026-02-01 07:24:31.990765417 +0000 UTC m=+145.111201851"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.016368 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.018016 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.518000855 +0000 UTC m=+145.638437289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.034436 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2cpj2" podStartSLOduration=119.034420938 podStartE2EDuration="1m59.034420938s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.032008984 +0000 UTC m=+145.152445418" watchObservedRunningTime="2026-02-01 07:24:32.034420938 +0000 UTC m=+145.154857372"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.078488 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bztv4"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.078839 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bztv4"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.118377 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.118661 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.618649108 +0000 UTC m=+145.739085542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.126003 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mflcb" podStartSLOduration=120.125980861 podStartE2EDuration="2m0.125980861s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.125504689 +0000 UTC m=+145.245941123" watchObservedRunningTime="2026-02-01 07:24:32.125980861 +0000 UTC m=+145.246417295"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.127686 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2vc59" podStartSLOduration=6.127678676 podStartE2EDuration="6.127678676s" podCreationTimestamp="2026-02-01 07:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.079148957 +0000 UTC m=+145.199585391" watchObservedRunningTime="2026-02-01 07:24:32.127678676 +0000 UTC m=+145.248115110"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.155688 4835 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bztv4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]log ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]etcd ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/max-in-flight-filter ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 01 07:24:32 crc kubenswrapper[4835]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/openshift.io-startinformers ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 01 07:24:32 crc kubenswrapper[4835]: livez check failed
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.156277 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bztv4" podUID="bbc68445-c2f0-43a6-a4f5-6ea9b4a37d1c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.219430 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.220555 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.720531664 +0000 UTC m=+145.840968148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.248570 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-k8v8n" podStartSLOduration=120.248557023 podStartE2EDuration="2m0.248557023s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.246338784 +0000 UTC m=+145.366775218" watchObservedRunningTime="2026-02-01 07:24:32.248557023 +0000 UTC m=+145.368993457"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.248873 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pqcsc" podStartSLOduration=120.248869381 podStartE2EDuration="2m0.248869381s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.198702719 +0000 UTC m=+145.319139153" watchObservedRunningTime="2026-02-01 07:24:32.248869381 +0000 UTC m=+145.369305815"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.269301 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-7nw98" podStartSLOduration=120.269282929 podStartE2EDuration="2m0.269282929s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.266205168 +0000 UTC m=+145.386641602" watchObservedRunningTime="2026-02-01 07:24:32.269282929 +0000 UTC m=+145.389719363"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.309796 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" podStartSLOduration=119.309763997 podStartE2EDuration="1m59.309763997s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.309355866 +0000 UTC m=+145.429792300" watchObservedRunningTime="2026-02-01 07:24:32.309763997 +0000 UTC m=+145.430200431"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.322083 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.322636 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.822615875 +0000 UTC m=+145.943052309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.350064 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f9wvq" podStartSLOduration=120.350032618 podStartE2EDuration="2m0.350032618s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.349261248 +0000 UTC m=+145.469697702" watchObservedRunningTime="2026-02-01 07:24:32.350032618 +0000 UTC m=+145.470469052"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.423753 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.424148 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.923976338 +0000 UTC m=+146.044412772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.424248 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.424602 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:32.924589894 +0000 UTC m=+146.045026328 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.436147 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hch5m" podStartSLOduration=120.436130988 podStartE2EDuration="2m0.436130988s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.430762287 +0000 UTC m=+145.551198711" watchObservedRunningTime="2026-02-01 07:24:32.436130988 +0000 UTC m=+145.556567422"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.436796 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf" podStartSLOduration=119.436791496 podStartE2EDuration="1m59.436791496s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.399474702 +0000 UTC m=+145.519911136" watchObservedRunningTime="2026-02-01 07:24:32.436791496 +0000 UTC m=+145.557227930"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.525768 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.526266 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.026248004 +0000 UTC m=+146.146684438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.627574 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.627907 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.127896034 +0000 UTC m=+146.248332468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.729084 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.729151 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.229137963 +0000 UTC m=+146.349574397 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.730051 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.730443 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.230426717 +0000 UTC m=+146.350863151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.830821 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.831120 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.331104561 +0000 UTC m=+146.451540995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.903843 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 01 07:24:32 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 01 07:24:32 crc kubenswrapper[4835]: [+]process-running ok
Feb 01 07:24:32 crc kubenswrapper[4835]: healthz check failed
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.903908 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.932010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:32 crc kubenswrapper[4835]: E0201 07:24:32.932332 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.432321029 +0000 UTC m=+146.552757463 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.959310 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" event={"ID":"2708d65e-6013-4f55-9492-3a3ec5529d9b","Type":"ContainerStarted","Data":"1fa9cace0d69d21718b39696c686f85b3be7f0345f914ae3d9a34bad8ad4a720"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.960828 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gmr7g" event={"ID":"9db65efb-d721-45dc-87a6-6ef40be6789d","Type":"ContainerStarted","Data":"7410aee2264cb726d977a414fd7ee98edbca424f416ffa6ee95fe55527e928aa"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.960939 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gmr7g"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.965770 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" event={"ID":"92112e1c-6b23-4d10-9f2b-0e33616c96f5","Type":"ContainerStarted","Data":"a4ba8c26df926b27f7caeb56e4ddfd0a81bc628464bdbe1d1c3aa525acde89ee"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.967732 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-shvm4" event={"ID":"d18912d2-49bb-4779-9b02-fc9707e55b38","Type":"ContainerStarted","Data":"b1e8073e200344862f088de57d95ee584f40f1ca870f3606f943921a94eef26b"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.970829 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" event={"ID":"137b200e-5dcd-43c9-82e2-332071d84cb0","Type":"ContainerStarted","Data":"98c793df94b793188e86124f6ff1a8161f18d725c6666c0e72eb3d6113d10246"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.984808 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" event={"ID":"87528d59-5bdb-4e92-8d6e-062005390f6f","Type":"ContainerStarted","Data":"5c4e1c79dc0d0b672a12fe1fa78b8c2ca579c25b8fdff91205e2ce6b414d999a"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.987958 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" event={"ID":"6fa37cd2-a8e5-4624-91e2-6d249bdb7c87","Type":"ContainerStarted","Data":"80c14e64102bed211e8ffd95ca8632fc5102b3a423d5991aff48a1918f7f78f9"}
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.988216 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.989318 4835 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7ngw7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.989358 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" podUID="6fa37cd2-a8e5-4624-91e2-6d249bdb7c87" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.989605 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4qc29" podStartSLOduration=119.989593669 podStartE2EDuration="1m59.989593669s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.987207136 +0000 UTC m=+146.107643560" watchObservedRunningTime="2026-02-01 07:24:32.989593669 +0000 UTC m=+146.110030103"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.989995 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nr86z" podStartSLOduration=120.98998875 podStartE2EDuration="2m0.98998875s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:32.469846317 +0000 UTC m=+145.590282751" watchObservedRunningTime="2026-02-01 07:24:32.98998875 +0000 UTC m=+146.110425184"
Feb 01 07:24:32 crc kubenswrapper[4835]: I0201 07:24:32.995170 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" event={"ID":"b9309ebb-034c-47a1-9328-62fda6feabbd","Type":"ContainerStarted","Data":"4c493a11eab89aefcb6bfd875f74f37469eb8583621ec363e3197c1f05f4cf83"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.000954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" event={"ID":"a67dd2fd-8463-4887-94b7-405df03c5c0a","Type":"ContainerStarted","Data":"0025c9036568250e9cd5742f8d6f745265094704dd04fca029716a2aa49bcab7"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.004645 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" event={"ID":"8fa1edf3-e0a6-4d1a-aa61-172397ca736b","Type":"ContainerStarted","Data":"81c0b5887294570dc667b6cb89b8b18fd90478ace13816a814b9986ebe7391b6"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.004813 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" event={"ID":"8fa1edf3-e0a6-4d1a-aa61-172397ca736b","Type":"ContainerStarted","Data":"b2c0143f0dfa31948fd52363889541c1e023e20062ecf14fdc66c741240f8954"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.004893 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.019948 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" event={"ID":"8615180e-fc31-41b2-ad59-5ae2e48af5a2","Type":"ContainerStarted","Data":"aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.021022 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.028565 4835 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mjg6g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.028663 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.036482 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" event={"ID":"a800a13f-d2a0-40d3-b6ae-e1a16c4cb6c2","Type":"ContainerStarted","Data":"d3c37d18f88bd6af013df8df81226f02096cf5fd27355056162bc199d4d23fec"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.036910 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.037963 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.537942174 +0000 UTC m=+146.658378608 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.061329 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" event={"ID":"1d5a72cc-b727-4dcf-85cd-d039dc785b65","Type":"ContainerStarted","Data":"ffe4a8bb29d3c3f4ed655ac9b373fd3387c5fd2915632b7f22de512845fe8612"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.061606 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" event={"ID":"1d5a72cc-b727-4dcf-85cd-d039dc785b65","Type":"ContainerStarted","Data":"ce0b94d7bd2530d394066be64acc80826a336507fdc5fabcf3b53f87467f7666"}
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064153 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8v8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064185 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8v8n" podUID="79c369eb-e17d-4a32-9167-934aa23fd4fc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064447 4835 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-p5fjs container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body=
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064552 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" podUID="60b0275a-57b6-482d-b046-ffd270801add" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064712 4835 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-q45cc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body=
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.064769 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc" podUID="1adf70cf-02dc-4c30-9c35-6507314a4fa8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.088728 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-shvm4" podStartSLOduration=7.088708882 podStartE2EDuration="7.088708882s" podCreationTimestamp="2026-02-01 07:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.031271868 +0000 UTC m=+146.151708302" watchObservedRunningTime="2026-02-01 07:24:33.088708882 +0000 UTC m=+146.209145316"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.090558 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gmr7g" podStartSLOduration=7.090552111 podStartE2EDuration="7.090552111s" podCreationTimestamp="2026-02-01 07:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.088645611 +0000 UTC m=+146.209082045" watchObservedRunningTime="2026-02-01 07:24:33.090552111 +0000 UTC m=+146.210988545"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.139461 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.144138 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.644125363 +0000 UTC m=+146.764561797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.149617 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-g4r2s" podStartSLOduration=121.149602238 podStartE2EDuration="2m1.149602238s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.123867879 +0000 UTC m=+146.244304313" watchObservedRunningTime="2026-02-01 07:24:33.149602238 +0000 UTC m=+146.270038672"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.211497 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-xgqrp" podStartSLOduration=121.211479669 podStartE2EDuration="2m1.211479669s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.150809129 +0000 UTC m=+146.271245583" watchObservedRunningTime="2026-02-01 07:24:33.211479669 +0000 UTC m=+146.331916103"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.237093 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" podStartSLOduration=121.237072993 podStartE2EDuration="2m1.237072993s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.210925284 +0000 UTC m=+146.331361718" watchObservedRunningTime="2026-02-01 07:24:33.237072993 +0000 UTC m=+146.357509427"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.243900 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.244242 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.744227362 +0000 UTC m=+146.864663796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.278631 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dk9xj" podStartSLOduration=121.278617059 podStartE2EDuration="2m1.278617059s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.272007125 +0000 UTC m=+146.392443559" watchObservedRunningTime="2026-02-01 07:24:33.278617059 +0000 UTC m=+146.399053493"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.278938 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" podStartSLOduration=120.278934647 podStartE2EDuration="2m0.278934647s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.239138138 +0000 UTC m=+146.359574562" watchObservedRunningTime="2026-02-01 07:24:33.278934647 +0000 UTC m=+146.399371081"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.298025 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fbdw8" podStartSLOduration=120.29801007 podStartE2EDuration="2m0.29801007s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.297293571 +0000 UTC m=+146.417730005" watchObservedRunningTime="2026-02-01 07:24:33.29801007 +0000 UTC m=+146.418446504"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.326034 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7" podStartSLOduration=120.326019878 podStartE2EDuration="2m0.326019878s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.324829127 +0000 UTC m=+146.445265571" watchObservedRunningTime="2026-02-01 07:24:33.326019878 +0000 UTC m=+146.446456322"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.346094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.346529 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.846514739 +0000 UTC m=+146.966951173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.395010 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ngjw6" podStartSLOduration=121.394994527 podStartE2EDuration="2m1.394994527s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.364983635 +0000 UTC m=+146.485420069" watchObservedRunningTime="2026-02-01 07:24:33.394994527 +0000 UTC m=+146.515430961"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.396918 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-84rg2" podStartSLOduration=121.396912987 podStartE2EDuration="2m1.396912987s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.394072752 +0000 UTC m=+146.514509186" watchObservedRunningTime="2026-02-01 07:24:33.396912987 +0000 UTC m=+146.517349421"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.447000 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.447181 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.947155212 +0000 UTC m=+147.067591646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.447382 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.447741 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:33.947731237 +0000 UTC m=+147.068167671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.548752 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.548950 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.048923275 +0000 UTC m=+147.169359709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.550050 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.550343 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.050332712 +0000 UTC m=+147.170769146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.651142 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.651363 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.151329705 +0000 UTC m=+147.271766159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.651649 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.651970 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.151961792 +0000 UTC m=+147.272398226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.725365 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.725727 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.752663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.753101 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.253086688 +0000 UTC m=+147.373523122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.854077 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.854381 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.354367578 +0000 UTC m=+147.474804012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.897790 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 01 07:24:33 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 01 07:24:33 crc kubenswrapper[4835]: [+]process-running ok
Feb 01 07:24:33 crc kubenswrapper[4835]: healthz check failed
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.897837 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 07:24:33 crc kubenswrapper[4835]: I0201 07:24:33.955521 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:33 crc kubenswrapper[4835]: E0201 07:24:33.955806 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.455790962 +0000 UTC m=+147.576227396 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.057026 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.057557 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.557545674 +0000 UTC m=+147.677982108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.066730 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"b2e5af8ce77456d4131584133f8d5c138df4117bb9cb6d6c90baaf9d40c354e0"}
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.066783 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"00737887f065d25ed96621f33158323f8d0660e440b7ef398fe15c9a4089207a"}
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.066793 4835 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mjg6g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.066827 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.089735 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7ngw7"
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.141316 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" podStartSLOduration=121.141297932 podStartE2EDuration="2m1.141297932s" podCreationTimestamp="2026-02-01 07:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:33.438293478 +0000 UTC m=+146.558729912" watchObservedRunningTime="2026-02-01 07:24:34.141297932 +0000 UTC m=+147.261734376"
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.159913 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.161302 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.661288049 +0000 UTC m=+147.781724483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.183300 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-q45cc"
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.217595 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.221600 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-k4l2m"
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.266707 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.267104 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.767089878 +0000 UTC m=+147.887526302 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.368114 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.368277 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.868253465 +0000 UTC m=+147.988689889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.368351 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.368862 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.868844661 +0000 UTC m=+147.989281155 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.469315 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.469511 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.969484885 +0000 UTC m=+148.089921319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.469567 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.469869 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:34.969856824 +0000 UTC m=+148.090293258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.556864 4835 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.570608 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.570790 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.070767905 +0000 UTC m=+148.191204339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.671868 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: E0201 07:24:34.672154 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-01 07:24:35.172141386 +0000 UTC m=+148.292577820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-66fqg" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.725314 4835 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-01T07:24:34.557057523Z","Handler":null,"Name":""} Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.743705 4835 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.743741 4835 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.772535 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.799682 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.873982 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.898855 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.898895 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.899626 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 01 07:24:34 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 01 07:24:34 crc kubenswrapper[4835]: [+]process-running ok
Feb 01 07:24:34 crc kubenswrapper[4835]: healthz check failed
Feb 01 07:24:34 crc kubenswrapper[4835]: I0201 07:24:34.899663 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.072850 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"90a4a04442a02eaea193885caae199909448634bbf117c7dbc60ef00386e3102"}
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.072902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" event={"ID":"ac6d201a-b05d-47ab-b71f-0859b88f0024","Type":"ContainerStarted","Data":"a401e08e1ac424cd796677f071dc26d5b98c279278d6c953705554f700c3f702"}
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.073327 4835 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mjg6g container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body=
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.073364 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.084750 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-66fqg\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.090704 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j9pkf"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.103284 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xwsnp" podStartSLOduration=9.103265772 podStartE2EDuration="9.103265772s" podCreationTimestamp="2026-02-01 07:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:35.100760656 +0000 UTC m=+148.221197090" watchObservedRunningTime="2026-02-01 07:24:35.103265772 +0000 UTC m=+148.223702206"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.169854 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.374183 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t677t"]
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.375045 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.381938 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.384447 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t677t"]
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.399108 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"]
Feb 01 07:24:35 crc kubenswrapper[4835]: W0201 07:24:35.403559 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac521dca_2154_40bb_bbdb_a22e3d6abd72.slice/crio-7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a WatchSource:0}: Error finding container 7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a: Status 404 returned error can't find the container with id 7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.482116 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.482165 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.482204 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.577472 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.582350 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"]
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583420 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583475 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583521 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583555 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583575 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583592 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.583822 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.584259 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.584874 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.586048 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.588479 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.591178 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.591354 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.593511 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.602266 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.603320 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"]
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.609699 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") pod \"community-operators-t677t\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.685016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.685078 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.685178 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.689917 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.691940 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.777893 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7n8wh"]
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.781574 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787073 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787113 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787132 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787712 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.787790 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.800118 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"]
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.811758 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") pod \"certified-operators-zbfbl\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.888144 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.888179 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.888224 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.889589 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.899086 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 01 07:24:35 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 01 07:24:35 crc kubenswrapper[4835]: [+]process-running ok
Feb 01 07:24:35 crc kubenswrapper[4835]: healthz check failed
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.899147 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.900439 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:24:35 crc kubenswrapper[4835]: W0201 07:24:35.942351 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-09b6d4ce2b3030e6207a49ca9214b7fd6cd091887ee3157104072e82fd9a8a10 WatchSource:0}: Error finding container 09b6d4ce2b3030e6207a49ca9214b7fd6cd091887ee3157104072e82fd9a8a10: Status 404 returned error can't find the container with id 09b6d4ce2b3030e6207a49ca9214b7fd6cd091887ee3157104072e82fd9a8a10
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.955461 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t677t"]
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.972732 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"]
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.973856 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.989036 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.989097 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.989157 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.989828 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.990106 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:35 crc kubenswrapper[4835]: I0201 07:24:35.996054 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"]
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.011488 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") pod \"community-operators-7n8wh\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.090929 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c7w5\" (UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.090969 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.091051 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.093266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" event={"ID":"ac521dca-2154-40bb-bbdb-a22e3d6abd72","Type":"ContainerStarted","Data":"3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81"}
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.093303 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" event={"ID":"ac521dca-2154-40bb-bbdb-a22e3d6abd72","Type":"ContainerStarted","Data":"7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a"}
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.094146 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.101776 4835 generic.go:334] "Generic (PLEG): container finished" podID="137b200e-5dcd-43c9-82e2-332071d84cb0" containerID="98c793df94b793188e86124f6ff1a8161f18d725c6666c0e72eb3d6113d10246" exitCode=0
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.101832 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" event={"ID":"137b200e-5dcd-43c9-82e2-332071d84cb0","Type":"ContainerDied","Data":"98c793df94b793188e86124f6ff1a8161f18d725c6666c0e72eb3d6113d10246"}
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.113190 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" podStartSLOduration=124.113169708 podStartE2EDuration="2m4.113169708s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:24:36.112357476 +0000 UTC m=+149.232793910" watchObservedRunningTime="2026-02-01 07:24:36.113169708 +0000 UTC m=+149.233606142"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.115386 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.119490 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"09b6d4ce2b3030e6207a49ca9214b7fd6cd091887ee3157104072e82fd9a8a10"}
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.120318 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerStarted","Data":"8633807aa4c1b4534aedf9236769294f25ed6ac597e2c0fda34cf924f7b62039"}
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.121855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a3c2f7856c30944518677107649ae5b93411db83b03c28be8ced56e33c6a709e"}
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.121873 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9962c288a34cb7f1e80f9e6be04f9e3b5f0b287b4fdabd815689c3d63c7dfab1"}
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.191954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.192010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c7w5\" (UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.192038 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.193795 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.196609 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.227649 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c7w5\" (UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") pod \"certified-operators-ng2z7\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: W0201 07:24:36.236572 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-043d6986445c5e721ecb76bb07f1db5d9cf8674bd03c2971a930672817bda3c6 WatchSource:0}: Error finding container 043d6986445c5e721ecb76bb07f1db5d9cf8674bd03c2971a930672817bda3c6: Status 404 returned error can't find the container with id 043d6986445c5e721ecb76bb07f1db5d9cf8674bd03c2971a930672817bda3c6
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.268490 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"]
Feb 01 07:24:36 crc kubenswrapper[4835]: W0201 07:24:36.298734 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a177b30_3240_49d8_b0c5_b74f8e8f4c7e.slice/crio-34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6 WatchSource:0}: Error finding container 34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6: Status 404 returned error can't find the container with id 34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.314567 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.420036 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"]
Feb 01 07:24:36 crc kubenswrapper[4835]: W0201 07:24:36.448756 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf562492e_dbf9_440e_978a_603956fc464e.slice/crio-86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274 WatchSource:0}: Error finding container 86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274: Status 404 returned error can't find the container with id 86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.734772 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"]
Feb 01 07:24:36 crc kubenswrapper[4835]: W0201 07:24:36.744384 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3a136e2_3caa_4ed0_960a_6b6a0fdef39e.slice/crio-c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e WatchSource:0}: Error finding container c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e: Status 404 returned error can't find the container with id c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.896655 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 01 07:24:36 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 01 07:24:36 crc kubenswrapper[4835]: [+]process-running ok
Feb 01 07:24:36 crc kubenswrapper[4835]: healthz check failed
Feb 01 07:24:36 crc kubenswrapper[4835]: I0201 07:24:36.896728 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.081955 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bztv4"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.087160 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bztv4"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.130702 4835 generic.go:334] "Generic (PLEG): container finished" podID="f562492e-dbf9-440e-978a-603956fc464e" containerID="c6c784d52b5c200fbc9c5b7fd427e7a9a01fe58abdfbe2cd4a7fa8dbd1de744a" exitCode=0
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.130768 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerDied","Data":"c6c784d52b5c200fbc9c5b7fd427e7a9a01fe58abdfbe2cd4a7fa8dbd1de744a"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.130794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerStarted","Data":"86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.132672 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.133023 4835 generic.go:334] "Generic (PLEG): container finished" podID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerID="a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f" exitCode=0
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.133110 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerDied","Data":"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.133134 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerStarted","Data":"c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.136176 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2d2fb313fc51758cd94b0a57062913771b944df11c6ef5c890a7119fbe4f88ac"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.136382 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.137447 4835 generic.go:334] "Generic (PLEG): container finished" podID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerID="eac60a2bcfc7a27f8cce064694d441e59039265b959d26823af533d85c7dcf10" exitCode=0
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.137496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerDied","Data":"eac60a2bcfc7a27f8cce064694d441e59039265b959d26823af533d85c7dcf10"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.137514 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerStarted","Data":"34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.139716 4835 generic.go:334] "Generic (PLEG): container finished" podID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerID="7270b81f0145b4123ee2f475f3f90b8aa11e59eef5e948db9ab2c46452e1838a" exitCode=0
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.139776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerDied","Data":"7270b81f0145b4123ee2f475f3f90b8aa11e59eef5e948db9ab2c46452e1838a"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.141669 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"da61315c3bbb176740f6113e98cef0959c4e498062bf542699f7d7a572634351"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.141697 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"043d6986445c5e721ecb76bb07f1db5d9cf8674bd03c2971a930672817bda3c6"}
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.452003 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.510565 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") pod \"137b200e-5dcd-43c9-82e2-332071d84cb0\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") "
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.510624 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") pod \"137b200e-5dcd-43c9-82e2-332071d84cb0\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") "
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.510646 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") pod \"137b200e-5dcd-43c9-82e2-332071d84cb0\" (UID: \"137b200e-5dcd-43c9-82e2-332071d84cb0\") "
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.513019 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume" (OuterVolumeSpecName: "config-volume") pod "137b200e-5dcd-43c9-82e2-332071d84cb0" (UID: "137b200e-5dcd-43c9-82e2-332071d84cb0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.520227 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h" (OuterVolumeSpecName: "kube-api-access-49g4h") pod "137b200e-5dcd-43c9-82e2-332071d84cb0" (UID: "137b200e-5dcd-43c9-82e2-332071d84cb0"). InnerVolumeSpecName "kube-api-access-49g4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.520493 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "137b200e-5dcd-43c9-82e2-332071d84cb0" (UID: "137b200e-5dcd-43c9-82e2-332071d84cb0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.572734 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"]
Feb 01 07:24:37 crc kubenswrapper[4835]: E0201 07:24:37.572936 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137b200e-5dcd-43c9-82e2-332071d84cb0" containerName="collect-profiles"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.572960 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="137b200e-5dcd-43c9-82e2-332071d84cb0" containerName="collect-profiles"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.573078 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="137b200e-5dcd-43c9-82e2-332071d84cb0" containerName="collect-profiles"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.573797 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.576326 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.584667 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"]
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.615813 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.615891 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.615952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.616171 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/137b200e-5dcd-43c9-82e2-332071d84cb0-config-volume\") on node \"crc\" DevicePath \"\""
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.616188 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/137b200e-5dcd-43c9-82e2-332071d84cb0-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.616198 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49g4h\" (UniqueName: \"kubernetes.io/projected/137b200e-5dcd-43c9-82e2-332071d84cb0-kube-api-access-49g4h\") on node \"crc\" DevicePath \"\""
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.716857 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.716937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.716977 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.717379 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.717436 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.737576 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") pod \"redhat-marketplace-4xx49\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.887164 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.897224 4835 patch_prober.go:28] interesting pod/router-default-5444994796-sdz4h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 01 07:24:37 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld
Feb 01 07:24:37 crc kubenswrapper[4835]: [+]process-running ok
Feb 01 07:24:37 crc kubenswrapper[4835]: healthz check failed
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.897320 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sdz4h" podUID="6f01f600-cee2-4257-9c5f-a0b7edcd7a9d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.973304 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"]
Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.974678 4835 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:37 crc kubenswrapper[4835]: I0201 07:24:37.990374 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.020763 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.020811 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.020989 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.123454 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.123787 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.123842 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.123984 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.124193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.125203 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4xx49"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.144983 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") pod \"redhat-marketplace-tlf77\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.151402 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" event={"ID":"137b200e-5dcd-43c9-82e2-332071d84cb0","Type":"ContainerDied","Data":"42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5"} Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.151457 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42603d073e4ca627863068ad7515b8856291ec8428ed3ebba7f5fa565c3a76d5" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.151518 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.165100 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.165155 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.178331 4835 patch_prober.go:28] interesting pod/console-f9d7485db-8hgqx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.179003 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8hgqx" podUID="9154a093-1841-44f5-a71d-e42f5c19dfba" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.318349 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.578979 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.580478 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.583946 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.590838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.629327 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.629370 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.629403 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.730400 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.730677 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.730708 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.731488 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.731720 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " 
pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.751829 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") pod \"redhat-operators-s7hk7\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.772142 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.893128 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.900040 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.907167 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.973039 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.974024 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:38 crc kubenswrapper[4835]: I0201 07:24:38.979317 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.036343 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.036394 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.036430 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.138065 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.138347 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.138513 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.139094 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.140227 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.149708 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.159365 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") pod \"redhat-operators-k5smh\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") " pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.164057 4835 generic.go:334] "Generic (PLEG): container finished" podID="9b287031-510c-410c-ade6-c2cf7a48e363" containerID="3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f" exitCode=0 Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.164147 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerDied","Data":"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f"} Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.164188 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerStarted","Data":"50bb18dda4afd99c54bbc442fbcd2bb9c50ee2eb6dac4877186bb6aa56a4b49b"} Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.170183 4835 generic.go:334] "Generic (PLEG): container finished" podID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerID="b14cf051de6ab1294efac8b8b8e42b820cf594040b129fc04b183d93a8efbf57" exitCode=0 Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.170904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerDied","Data":"b14cf051de6ab1294efac8b8b8e42b820cf594040b129fc04b183d93a8efbf57"} Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.171166 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerStarted","Data":"dea430e052099dd47c2c324f9a18af947b95755e422272ec8bbff41882bef5e5"} Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.174766 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-sdz4h" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.176088 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.176969 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.180317 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.182626 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.191674 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.239499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.239610 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.264337 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-p5fjs" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.342046 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.342146 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.342170 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.362692 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.371782 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.435206 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8v8n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.435234 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-k8v8n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.435272 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-k8v8n" podUID="79c369eb-e17d-4a32-9167-934aa23fd4fc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.435288 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-k8v8n" podUID="79c369eb-e17d-4a32-9167-934aa23fd4fc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.519528 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.600011 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.767918 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:24:39 crc kubenswrapper[4835]: W0201 07:24:39.822163 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc8c2486_a383_48cb_aefe_1610bc1c534f.slice/crio-60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7 WatchSource:0}: Error finding container 60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7: Status 404 returned error can't find the container with id 60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7 Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.873697 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.996290 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 01 07:24:39 crc kubenswrapper[4835]: I0201 07:24:39.998265 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.001674 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.001909 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.002120 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.056107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.056184 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.157349 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.157507 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.157617 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.191016 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.195172 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d271f6d-4f2f-40e8-a928-4a88a2439f17","Type":"ContainerStarted","Data":"9db8075d36839f752f941857c0522159a35de95a213d37df8437d5339245dd74"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.214431 4835 generic.go:334] "Generic (PLEG): container finished" podID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerID="d5974ea84742510757e055f310d0049c446f1e2fe023968cfe1b5034d72af99c" exitCode=0 Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.215670 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerDied","Data":"d5974ea84742510757e055f310d0049c446f1e2fe023968cfe1b5034d72af99c"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.215735 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerStarted","Data":"46b5cafa1f07b5021e9e78fc5e6be54cf12c37d6cc9f28c581409330362b0959"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.223287 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerID="22e3a2a64402097b404fc7d0b7e471cb7339456b1827cdc5eeb1a1b4417b2cf4" exitCode=0 Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.223418 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerDied","Data":"22e3a2a64402097b404fc7d0b7e471cb7339456b1827cdc5eeb1a1b4417b2cf4"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.223478 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerStarted","Data":"60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7"} Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.373675 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:40 crc kubenswrapper[4835]: I0201 07:24:40.955351 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 01 07:24:40 crc kubenswrapper[4835]: W0201 07:24:40.967801 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5e745045_e905_4988_b768_a0eac1b93996.slice/crio-ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99 WatchSource:0}: Error finding container ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99: Status 404 returned error can't find the container with id ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99 Feb 01 07:24:41 crc kubenswrapper[4835]: I0201 07:24:41.319205 4835 generic.go:334] "Generic (PLEG): container finished" podID="2d271f6d-4f2f-40e8-a928-4a88a2439f17" containerID="1e6fbac6dced342cacb3472feaa24aa426ebcb958226b83d5ee2d270b6503b08" exitCode=0 Feb 01 07:24:41 crc kubenswrapper[4835]: I0201 07:24:41.319307 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d271f6d-4f2f-40e8-a928-4a88a2439f17","Type":"ContainerDied","Data":"1e6fbac6dced342cacb3472feaa24aa426ebcb958226b83d5ee2d270b6503b08"} Feb 01 07:24:41 crc kubenswrapper[4835]: I0201 07:24:41.324846 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e745045-e905-4988-b768-a0eac1b93996","Type":"ContainerStarted","Data":"ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99"} Feb 01 07:24:41 crc kubenswrapper[4835]: I0201 07:24:41.635719 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gmr7g" Feb 01 07:24:42 crc kubenswrapper[4835]: I0201 07:24:42.340610 4835 generic.go:334] "Generic (PLEG): container finished" podID="5e745045-e905-4988-b768-a0eac1b93996" containerID="679b5df1a39c891464657a42281a4ead0a7d17b93b75b99d7f25af9269ddb1fc" exitCode=0 Feb 01 07:24:42 crc kubenswrapper[4835]: I0201 07:24:42.340825 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e745045-e905-4988-b768-a0eac1b93996","Type":"ContainerDied","Data":"679b5df1a39c891464657a42281a4ead0a7d17b93b75b99d7f25af9269ddb1fc"} Feb 01 07:24:48 crc kubenswrapper[4835]: I0201 07:24:48.181671 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:48 crc kubenswrapper[4835]: I0201 07:24:48.187347 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8hgqx" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.255376 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.258479 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311514 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") pod \"5e745045-e905-4988-b768-a0eac1b93996\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311581 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") pod \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311605 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") pod \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\" (UID: \"2d271f6d-4f2f-40e8-a928-4a88a2439f17\") " Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311641 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") pod \"5e745045-e905-4988-b768-a0eac1b93996\" (UID: \"5e745045-e905-4988-b768-a0eac1b93996\") " Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.311925 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e745045-e905-4988-b768-a0eac1b93996" (UID: "5e745045-e905-4988-b768-a0eac1b93996"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.312145 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d271f6d-4f2f-40e8-a928-4a88a2439f17" (UID: "2d271f6d-4f2f-40e8-a928-4a88a2439f17"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.317377 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d271f6d-4f2f-40e8-a928-4a88a2439f17" (UID: "2d271f6d-4f2f-40e8-a928-4a88a2439f17"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.319023 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e745045-e905-4988-b768-a0eac1b93996" (UID: "5e745045-e905-4988-b768-a0eac1b93996"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.392143 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2d271f6d-4f2f-40e8-a928-4a88a2439f17","Type":"ContainerDied","Data":"9db8075d36839f752f941857c0522159a35de95a213d37df8437d5339245dd74"} Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.392178 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db8075d36839f752f941857c0522159a35de95a213d37df8437d5339245dd74" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.392226 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.395567 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5e745045-e905-4988-b768-a0eac1b93996","Type":"ContainerDied","Data":"ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99"} Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.395608 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae43453708d322d90bc452d6587e1ac503e1cff1a7f8f218c826f87873757b99" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.395668 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.416894 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e745045-e905-4988-b768-a0eac1b93996-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.416920 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.416930 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d271f6d-4f2f-40e8-a928-4a88a2439f17-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.416940 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e745045-e905-4988-b768-a0eac1b93996-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:24:49 crc kubenswrapper[4835]: I0201 07:24:49.453600 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-k8v8n" Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.181183 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.191831 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.191935 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" 
podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.713811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:55 crc kubenswrapper[4835]: I0201 07:24:55.837247 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/caf346fd-1c47-4f35-a5e6-79f7ac8fcafe-metrics-certs\") pod \"network-metrics-daemon-2msm5\" (UID: \"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe\") " pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:24:56 crc kubenswrapper[4835]: I0201 07:24:56.103544 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2msm5" Feb 01 07:25:06 crc kubenswrapper[4835]: E0201 07:25:06.077754 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 01 07:25:06 crc kubenswrapper[4835]: E0201 07:25:06.078698 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k72t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-t677t_openshift-marketplace(835b2622-9047-4e3a-b019-6f15c5fd4566): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 07:25:06 crc kubenswrapper[4835]: E0201 07:25:06.079994 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-t677t" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.115723 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-t677t" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.195245 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.195853 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5c7w5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ng2z7_openshift-marketplace(e3a136e2-3caa-4ed0-960a-6b6a0fdef39e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.197300 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ng2z7" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.242202 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.242383 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh6bn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zbfbl_openshift-marketplace(7a177b30-3240-49d8-b0c5-b74f8e8f4c7e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 07:25:09 crc kubenswrapper[4835]: E0201 07:25:09.243615 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-zbfbl" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" Feb 01 07:25:09 crc kubenswrapper[4835]: I0201 07:25:09.575970 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9t7c7" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.526455 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zbfbl" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.526534 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ng2z7" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.623897 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.624250 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcvsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4xx49_openshift-marketplace(602186bd-e71a-4ce1-ad39-c56495e815c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.625680 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4xx49" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.644727 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.645011 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
Feb 01 07:25:10 crc kubenswrapper[4835]: E0201 07:25:10.646519 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tlf77" podUID="9b287031-510c-410c-ade6-c2cf7a48e363"
Feb 01 07:25:10 crc kubenswrapper[4835]: I0201 07:25:10.980504 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2msm5"]
Feb 01 07:25:10 crc kubenswrapper[4835]: W0201 07:25:10.991570 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcaf346fd_1c47_4f35_a5e6_79f7ac8fcafe.slice/crio-976a92bf8966e490f7cd8f9dc0d4b383083d4002d7be8d0e9e2bbb527c97e57e WatchSource:0}: Error finding container 976a92bf8966e490f7cd8f9dc0d4b383083d4002d7be8d0e9e2bbb527c97e57e: Status 404 returned error can't find the container with id 976a92bf8966e490f7cd8f9dc0d4b383083d4002d7be8d0e9e2bbb527c97e57e
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.557488 4835 generic.go:334] "Generic (PLEG): container finished" podID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerID="deccbf5bf47273db8305d287368e84a9555304937b617c52aaad45a3c56162a2" exitCode=0
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.557722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerDied","Data":"deccbf5bf47273db8305d287368e84a9555304937b617c52aaad45a3c56162a2"}
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.564994 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerID="7dfe92877369cb97f3ec7447941cb4bb3ac1fbbf67088e96c4fce3815dd8e8dc" exitCode=0
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.565054 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerDied","Data":"7dfe92877369cb97f3ec7447941cb4bb3ac1fbbf67088e96c4fce3815dd8e8dc"}
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.572216 4835 generic.go:334] "Generic (PLEG): container finished" podID="f562492e-dbf9-440e-978a-603956fc464e" containerID="25fdb854cbe1bf7efd7e7f32850a0d48ca8d03934de27955c9c0311a3869e9eb" exitCode=0
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.572288 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerDied","Data":"25fdb854cbe1bf7efd7e7f32850a0d48ca8d03934de27955c9c0311a3869e9eb"}
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.580776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2msm5" event={"ID":"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe","Type":"ContainerStarted","Data":"eb60a4d27a4cc1b5f3c82da369f83ac327916bf671ccae285fd0fb45c373ecf6"}
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.580820 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2msm5" event={"ID":"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe","Type":"ContainerStarted","Data":"165e283bd22cdccea9ad0b40eddea97692652dff08663bcc215451372333ccca"}
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.580833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2msm5" event={"ID":"caf346fd-1c47-4f35-a5e6-79f7ac8fcafe","Type":"ContainerStarted","Data":"976a92bf8966e490f7cd8f9dc0d4b383083d4002d7be8d0e9e2bbb527c97e57e"}
Feb 01 07:25:11 crc kubenswrapper[4835]: E0201 07:25:11.582697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4xx49" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3"
Feb 01 07:25:11 crc kubenswrapper[4835]: E0201 07:25:11.582923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-tlf77" podUID="9b287031-510c-410c-ade6-c2cf7a48e363"
Feb 01 07:25:11 crc kubenswrapper[4835]: I0201 07:25:11.647319 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-2msm5" podStartSLOduration=159.647301693 podStartE2EDuration="2m39.647301693s" podCreationTimestamp="2026-02-01 07:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:25:11.645730442 +0000 UTC m=+184.766166906" watchObservedRunningTime="2026-02-01 07:25:11.647301693 +0000 UTC m=+184.767738127"
Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 07:25:12.596941 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerStarted","Data":"5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe"}
Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 07:25:12.600969 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerStarted","Data":"9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a"}
Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 07:25:12.604192 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerStarted","Data":"c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff"}
Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 07:25:12.620728 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7n8wh" podStartSLOduration=2.541320089 podStartE2EDuration="37.620706396s" podCreationTimestamp="2026-02-01 07:24:35 +0000 UTC" firstStartedPulling="2026-02-01 07:24:37.132177712 +0000 UTC m=+150.252614176" lastFinishedPulling="2026-02-01 07:25:12.211563999 +0000 UTC m=+185.332000483" observedRunningTime="2026-02-01 07:25:12.61860175 +0000 UTC m=+185.739038184" watchObservedRunningTime="2026-02-01 07:25:12.620706396 +0000 UTC m=+185.741142850"
Feb 01 07:25:12 crc kubenswrapper[4835]: I0201 07:25:12.635728 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k5smh" podStartSLOduration=2.8461160039999998 podStartE2EDuration="34.635712561s" podCreationTimestamp="2026-02-01 07:24:38 +0000 UTC" firstStartedPulling="2026-02-01 07:24:40.227154716 +0000 UTC m=+153.347591150" lastFinishedPulling="2026-02-01 07:25:12.016751283 +0000 UTC m=+185.137187707" observedRunningTime="2026-02-01 07:25:12.633607336 +0000 UTC m=+185.754043780" watchObservedRunningTime="2026-02-01 07:25:12.635712561 +0000 UTC m=+185.756148995"
Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.587531 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7hk7" podStartSLOduration=4.858576488 podStartE2EDuration="36.587509007s" podCreationTimestamp="2026-02-01 07:24:38 +0000 UTC" firstStartedPulling="2026-02-01 07:24:40.217554482 +0000 UTC m=+153.337990916" lastFinishedPulling="2026-02-01 07:25:11.946487011 +0000 UTC m=+185.066923435" observedRunningTime="2026-02-01 07:25:12.655129033 +0000 UTC m=+185.775565467" watchObservedRunningTime="2026-02-01 07:25:14.587509007 +0000 UTC m=+187.707945441"
Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.589636 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 01 07:25:14 crc kubenswrapper[4835]: E0201 07:25:14.589886 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e745045-e905-4988-b768-a0eac1b93996" containerName="pruner"
Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.589899 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e745045-e905-4988-b768-a0eac1b93996" containerName="pruner"
Feb 01 07:25:14 crc kubenswrapper[4835]: E0201 07:25:14.589919 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d271f6d-4f2f-40e8-a928-4a88a2439f17" containerName="pruner"
Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.589928 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d271f6d-4f2f-40e8-a928-4a88a2439f17" containerName="pruner"
Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.590044 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e745045-e905-4988-b768-a0eac1b93996" containerName="pruner"
podUID="5e745045-e905-4988-b768-a0eac1b93996" containerName="pruner" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.590061 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d271f6d-4f2f-40e8-a928-4a88a2439f17" containerName="pruner" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.590481 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.594267 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.595137 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.598864 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.719660 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.720600 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.821861 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.821959 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.822027 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.841658 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:14 crc kubenswrapper[4835]: I0201 07:25:14.922440 4835 util.go:30] "No sandbox for pod can be found. 
Feb 01 07:25:15 crc kubenswrapper[4835]: I0201 07:25:15.307349 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 01 07:25:15 crc kubenswrapper[4835]: I0201 07:25:15.624670 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6187f01f-46de-413a-92cc-bc0f1375d41d","Type":"ContainerStarted","Data":"c31d81fc5c0cd09625c38c8ab4a7ca33f56c0b53ab94891b3e9f920ec9a6b7b8"}
Feb 01 07:25:15 crc kubenswrapper[4835]: I0201 07:25:15.705689 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.115995 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.116033 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.271185 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7n8wh"
Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.632567 4835 generic.go:334] "Generic (PLEG): container finished" podID="6187f01f-46de-413a-92cc-bc0f1375d41d" containerID="685e3c665dd59964bd38d414001e963309cce90e76dab346d0a1e34c8e97e399" exitCode=0
Feb 01 07:25:16 crc kubenswrapper[4835]: I0201 07:25:16.632620 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6187f01f-46de-413a-92cc-bc0f1375d41d","Type":"ContainerDied","Data":"685e3c665dd59964bd38d414001e963309cce90e76dab346d0a1e34c8e97e399"}
Feb 01 07:25:17 crc kubenswrapper[4835]: I0201 07:25:17.552170 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"]
Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.002044 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.161768 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") pod \"6187f01f-46de-413a-92cc-bc0f1375d41d\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") "
Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.161830 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") pod \"6187f01f-46de-413a-92cc-bc0f1375d41d\" (UID: \"6187f01f-46de-413a-92cc-bc0f1375d41d\") "
Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.161884 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6187f01f-46de-413a-92cc-bc0f1375d41d" (UID: "6187f01f-46de-413a-92cc-bc0f1375d41d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.162061 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6187f01f-46de-413a-92cc-bc0f1375d41d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.169552 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6187f01f-46de-413a-92cc-bc0f1375d41d" (UID: "6187f01f-46de-413a-92cc-bc0f1375d41d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.263553 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6187f01f-46de-413a-92cc-bc0f1375d41d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.646592 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6187f01f-46de-413a-92cc-bc0f1375d41d","Type":"ContainerDied","Data":"c31d81fc5c0cd09625c38c8ab4a7ca33f56c0b53ab94891b3e9f920ec9a6b7b8"} Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.646648 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31d81fc5c0cd09625c38c8ab4a7ca33f56c0b53ab94891b3e9f920ec9a6b7b8" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.646678 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.907578 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:25:18 crc kubenswrapper[4835]: I0201 07:25:18.907779 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.372939 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.372993 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.424037 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.696132 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k5smh" Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.735702 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:25:19 crc kubenswrapper[4835]: I0201 07:25:19.968694 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s7hk7" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server" probeResult="failure" output=< Feb 01 07:25:19 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 01 07:25:19 crc kubenswrapper[4835]: > Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.384182 4835 
Feb 01 07:25:21 crc kubenswrapper[4835]: E0201 07:25:21.384814 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6187f01f-46de-413a-92cc-bc0f1375d41d" containerName="pruner"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.384841 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6187f01f-46de-413a-92cc-bc0f1375d41d" containerName="pruner"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.384932 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6187f01f-46de-413a-92cc-bc0f1375d41d" containerName="pruner"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.385398 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.387868 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.388147 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.389077 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.513501 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.513550 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.513748 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.614811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.614873 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.614896 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc"
\"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.614951 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.615018 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.640324 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") pod \"installer-9-crc\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.660890 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k5smh" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="registry-server" containerID="cri-o://c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff" gracePeriod=2 Feb 01 07:25:21 crc kubenswrapper[4835]: I0201 07:25:21.709075 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:25:22 crc kubenswrapper[4835]: I0201 07:25:22.115848 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 01 07:25:22 crc kubenswrapper[4835]: I0201 07:25:22.666558 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9b454b8-f758-43d4-bd2b-93ebc807e06e","Type":"ContainerStarted","Data":"2997edb8ab02ca7e2da0f4120bdf3140a5e44974f1c0d9270cf560bcceec34c4"} Feb 01 07:25:22 crc kubenswrapper[4835]: I0201 07:25:22.669022 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerID="c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff" exitCode=0 Feb 01 07:25:22 crc kubenswrapper[4835]: I0201 07:25:22.669076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerDied","Data":"c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff"} Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.391570 4835 util.go:48] "No ready sandbox for pod can be found. 
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.544080 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") pod \"cc8c2486-a383-48cb-aefe-1610bc1c534f\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") "
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.544154 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") pod \"cc8c2486-a383-48cb-aefe-1610bc1c534f\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") "
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.544176 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") pod \"cc8c2486-a383-48cb-aefe-1610bc1c534f\" (UID: \"cc8c2486-a383-48cb-aefe-1610bc1c534f\") "
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.544899 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities" (OuterVolumeSpecName: "utilities") pod "cc8c2486-a383-48cb-aefe-1610bc1c534f" (UID: "cc8c2486-a383-48cb-aefe-1610bc1c534f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.548440 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7" (OuterVolumeSpecName: "kube-api-access-bggh7") pod "cc8c2486-a383-48cb-aefe-1610bc1c534f" (UID: "cc8c2486-a383-48cb-aefe-1610bc1c534f"). InnerVolumeSpecName "kube-api-access-bggh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.645660 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bggh7\" (UniqueName: \"kubernetes.io/projected/cc8c2486-a383-48cb-aefe-1610bc1c534f-kube-api-access-bggh7\") on node \"crc\" DevicePath \"\""
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.645984 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.675991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5smh" event={"ID":"cc8c2486-a383-48cb-aefe-1610bc1c534f","Type":"ContainerDied","Data":"60136c9d9c1fa01ab239559ff4cf41446038fd0cd99c254158238a21609db4a7"}
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.676026 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5smh"
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.676043 4835 scope.go:117] "RemoveContainer" containerID="c6d524ddca405a0b23f12afddf880a49b965b141dbf1843686ebe4bac83255ff"
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.691671 4835 scope.go:117] "RemoveContainer" containerID="7dfe92877369cb97f3ec7447941cb4bb3ac1fbbf67088e96c4fce3815dd8e8dc"
Feb 01 07:25:23 crc kubenswrapper[4835]: I0201 07:25:23.711431 4835 scope.go:117] "RemoveContainer" containerID="22e3a2a64402097b404fc7d0b7e471cb7339456b1827cdc5eeb1a1b4417b2cf4"
Feb 01 07:25:24 crc kubenswrapper[4835]: I0201 07:25:24.681882 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9b454b8-f758-43d4-bd2b-93ebc807e06e","Type":"ContainerStarted","Data":"3a62854f07efe9ee61bbc8b6cf4f08d0ff0e9a200d15a47492c6bdf618532148"}
Feb 01 07:25:24 crc kubenswrapper[4835]: I0201 07:25:24.696250 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.69622913 podStartE2EDuration="3.69622913s" podCreationTimestamp="2026-02-01 07:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:25:24.694554781 +0000 UTC m=+197.814991215" watchObservedRunningTime="2026-02-01 07:25:24.69622913 +0000 UTC m=+197.816665574"
Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.191577 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.191667 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.761298 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc8c2486-a383-48cb-aefe-1610bc1c534f" (UID: "cc8c2486-a383-48cb-aefe-1610bc1c534f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.772488 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8c2486-a383-48cb-aefe-1610bc1c534f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.804690 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:25:25 crc kubenswrapper[4835]: I0201 07:25:25.807613 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k5smh"] Feb 01 07:25:26 crc kubenswrapper[4835]: I0201 07:25:26.155371 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:25:27 crc kubenswrapper[4835]: I0201 07:25:27.572303 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" path="/var/lib/kubelet/pods/cc8c2486-a383-48cb-aefe-1610bc1c534f/volumes" Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.462229 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"] Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.462900 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7n8wh" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="registry-server" containerID="cri-o://5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe" gracePeriod=2 Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.702818 4835 generic.go:334] "Generic (PLEG): container finished" podID="f562492e-dbf9-440e-978a-603956fc464e" containerID="5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe" exitCode=0 Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.702902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerDied","Data":"5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.703941 4835 generic.go:334] "Generic (PLEG): container finished" podID="9b287031-510c-410c-ade6-c2cf7a48e363" containerID="5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd" exitCode=0 Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.703984 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerDied","Data":"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.707216 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerStarted","Data":"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.711127 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerStarted","Data":"7fde970c7809bb8c50b149f97b8907cd34e5ed3f92e53b3f48046bec959d09ef"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.712436 4835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerStarted","Data":"1b7f8d984d304fa16176f9ff67b5f5c30b1244ad6e8dd4e1ef20f9098a0f7fe2"} Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.956497 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:25:28 crc kubenswrapper[4835]: I0201 07:25:28.993708 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.015818 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.112845 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") pod \"f562492e-dbf9-440e-978a-603956fc464e\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.112953 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") pod \"f562492e-dbf9-440e-978a-603956fc464e\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.112996 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") pod \"f562492e-dbf9-440e-978a-603956fc464e\" (UID: \"f562492e-dbf9-440e-978a-603956fc464e\") " Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.113773 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities" (OuterVolumeSpecName: "utilities") pod "f562492e-dbf9-440e-978a-603956fc464e" (UID: "f562492e-dbf9-440e-978a-603956fc464e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.118123 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx" (OuterVolumeSpecName: "kube-api-access-r7jhx") pod "f562492e-dbf9-440e-978a-603956fc464e" (UID: "f562492e-dbf9-440e-978a-603956fc464e"). InnerVolumeSpecName "kube-api-access-r7jhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.210148 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f562492e-dbf9-440e-978a-603956fc464e" (UID: "f562492e-dbf9-440e-978a-603956fc464e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.213889 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7jhx\" (UniqueName: \"kubernetes.io/projected/f562492e-dbf9-440e-978a-603956fc464e-kube-api-access-r7jhx\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.213919 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.213930 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f562492e-dbf9-440e-978a-603956fc464e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:29 crc kubenswrapper[4835]: E0201 07:25:29.286465 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod602186bd_e71a_4ce1_ad39_c56495e815c3.slice/crio-b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod602186bd_e71a_4ce1_ad39_c56495e815c3.slice/crio-conmon-b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75.scope\": RecentStats: unable to find data in memory cache]" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.720386 4835 generic.go:334] "Generic (PLEG): container finished" podID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerID="7fde970c7809bb8c50b149f97b8907cd34e5ed3f92e53b3f48046bec959d09ef" exitCode=0 Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.720479 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerDied","Data":"7fde970c7809bb8c50b149f97b8907cd34e5ed3f92e53b3f48046bec959d09ef"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.729802 4835 generic.go:334] "Generic (PLEG): container finished" podID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerID="b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75" exitCode=0 Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.729833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerDied","Data":"b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.732107 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerDied","Data":"1b7f8d984d304fa16176f9ff67b5f5c30b1244ad6e8dd4e1ef20f9098a0f7fe2"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.732109 4835 generic.go:334] "Generic (PLEG): container finished" podID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerID="1b7f8d984d304fa16176f9ff67b5f5c30b1244ad6e8dd4e1ef20f9098a0f7fe2" exitCode=0 Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.735321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7n8wh" 
event={"ID":"f562492e-dbf9-440e-978a-603956fc464e","Type":"ContainerDied","Data":"86a332e6785f7fd31c68a8369c40ba5c5a557e81b2b71995f91b8e3ba6b2e274"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.735363 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7n8wh" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.735373 4835 scope.go:117] "RemoveContainer" containerID="5c8d88d803cbf808d4f6e7bbccdd22422fa76272b787ff433136b59f5dde80fe" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.740326 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerStarted","Data":"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.742634 4835 generic.go:334] "Generic (PLEG): container finished" podID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerID="4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b" exitCode=0 Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.742855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerDied","Data":"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b"} Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.757797 4835 scope.go:117] "RemoveContainer" containerID="25fdb854cbe1bf7efd7e7f32850a0d48ca8d03934de27955c9c0311a3869e9eb" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.787162 4835 scope.go:117] "RemoveContainer" containerID="c6c784d52b5c200fbc9c5b7fd427e7a9a01fe58abdfbe2cd4a7fa8dbd1de744a" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.792256 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tlf77" podStartSLOduration=2.8687939399999998 podStartE2EDuration="52.792233351s" podCreationTimestamp="2026-02-01 07:24:37 +0000 UTC" firstStartedPulling="2026-02-01 07:24:39.172335847 +0000 UTC m=+152.292772281" lastFinishedPulling="2026-02-01 07:25:29.095775268 +0000 UTC m=+202.216211692" observedRunningTime="2026-02-01 07:25:29.788112632 +0000 UTC m=+202.908549066" watchObservedRunningTime="2026-02-01 07:25:29.792233351 +0000 UTC m=+202.912669795" Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.805114 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"] Feb 01 07:25:29 crc kubenswrapper[4835]: I0201 07:25:29.807941 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7n8wh"] Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.748842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerStarted","Data":"e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601"} Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.752321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerStarted","Data":"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297"} Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.754254 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerStarted","Data":"0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68"} Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.756164 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerStarted","Data":"9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a"} Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.773340 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t677t" podStartSLOduration=2.7895730629999997 podStartE2EDuration="55.773323101s" podCreationTimestamp="2026-02-01 07:24:35 +0000 UTC" firstStartedPulling="2026-02-01 07:24:37.140751248 +0000 UTC m=+150.261187702" lastFinishedPulling="2026-02-01 07:25:30.124501296 +0000 UTC m=+203.244937740" observedRunningTime="2026-02-01 07:25:30.771431006 +0000 UTC m=+203.891867440" watchObservedRunningTime="2026-02-01 07:25:30.773323101 +0000 UTC m=+203.893759535" Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.791708 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ng2z7" podStartSLOduration=2.680829851 podStartE2EDuration="55.791684532s" podCreationTimestamp="2026-02-01 07:24:35 +0000 UTC" firstStartedPulling="2026-02-01 07:24:37.134980436 +0000 UTC m=+150.255416870" lastFinishedPulling="2026-02-01 07:25:30.245835107 +0000 UTC m=+203.366271551" observedRunningTime="2026-02-01 07:25:30.789437627 +0000 UTC m=+203.909874081" watchObservedRunningTime="2026-02-01 07:25:30.791684532 +0000 UTC m=+203.912120966" Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.813228 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zbfbl" podStartSLOduration=2.740649783 podStartE2EDuration="55.813211255s" podCreationTimestamp="2026-02-01 07:24:35 +0000 UTC" firstStartedPulling="2026-02-01 07:24:37.138438157 +0000 UTC m=+150.258874591" lastFinishedPulling="2026-02-01 07:25:30.210999629 +0000 UTC m=+203.331436063" observedRunningTime="2026-02-01 07:25:30.8123585 +0000 UTC m=+203.932794934" watchObservedRunningTime="2026-02-01 07:25:30.813211255 +0000 UTC m=+203.933647689" Feb 01 07:25:30 crc kubenswrapper[4835]: I0201 07:25:30.836272 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xx49" podStartSLOduration=2.823414578 podStartE2EDuration="53.836256862s" podCreationTimestamp="2026-02-01 07:24:37 +0000 UTC" firstStartedPulling="2026-02-01 07:24:39.17206315 +0000 UTC m=+152.292499584" lastFinishedPulling="2026-02-01 07:25:30.184905434 +0000 UTC m=+203.305341868" observedRunningTime="2026-02-01 07:25:30.834827001 +0000 UTC m=+203.955263435" watchObservedRunningTime="2026-02-01 07:25:30.836256862 +0000 UTC m=+203.956693296" Feb 01 07:25:31 crc kubenswrapper[4835]: I0201 07:25:31.574093 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f562492e-dbf9-440e-978a-603956fc464e" path="/var/lib/kubelet/pods/f562492e-dbf9-440e-978a-603956fc464e/volumes" Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.692652 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 
Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.764114 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.842524 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t677t"
Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.901546 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.901611 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:25:35 crc kubenswrapper[4835]: I0201 07:25:35.951721 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.315583 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.315627 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.393139 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.870202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zbfbl"
Feb 01 07:25:36 crc kubenswrapper[4835]: I0201 07:25:36.870646 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ng2z7"
Feb 01 07:25:37 crc kubenswrapper[4835]: I0201 07:25:37.888588 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:25:37 crc kubenswrapper[4835]: I0201 07:25:37.891807 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:25:37 crc kubenswrapper[4835]: I0201 07:25:37.929561 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xx49"
Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.318469 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tlf77"
Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.318914 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tlf77"
Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.384463 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tlf77"
Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.658273 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"]
Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.811070 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ng2z7" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="registry-server" containerID="cri-o://78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" gracePeriod=2
podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="registry-server" containerID="cri-o://78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" gracePeriod=2 Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.881103 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:25:38 crc kubenswrapper[4835]: I0201 07:25:38.882475 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.272190 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.454404 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") pod \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.454815 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") pod \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.454847 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c7w5\" (UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") pod \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\" (UID: \"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e\") " Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.456531 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities" (OuterVolumeSpecName: "utilities") pod "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" (UID: "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.462629 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5" (OuterVolumeSpecName: "kube-api-access-5c7w5") pod "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" (UID: "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e"). InnerVolumeSpecName "kube-api-access-5c7w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.542480 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" (UID: "e3a136e2-3caa-4ed0-960a-6b6a0fdef39e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.556521 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.556584 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.556601 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c7w5\" (UniqueName: \"kubernetes.io/projected/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e-kube-api-access-5c7w5\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820829 4835 generic.go:334] "Generic (PLEG): container finished" podID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerID="78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" exitCode=0 Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerDied","Data":"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297"} Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820931 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ng2z7" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820952 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ng2z7" event={"ID":"e3a136e2-3caa-4ed0-960a-6b6a0fdef39e","Type":"ContainerDied","Data":"c56ac053edf9fdbd97a44ab1c01dec3b54c9bd91c581423e5a21d7786e48591e"} Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.820978 4835 scope.go:117] "RemoveContainer" containerID="78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.844983 4835 scope.go:117] "RemoveContainer" containerID="4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.855008 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"] Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.866717 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ng2z7"] Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.874472 4835 scope.go:117] "RemoveContainer" containerID="a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.908756 4835 scope.go:117] "RemoveContainer" containerID="78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" Feb 01 07:25:39 crc kubenswrapper[4835]: E0201 07:25:39.909359 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297\": container with ID starting with 78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297 not found: ID does not exist" containerID="78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.909398 
4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297"} err="failed to get container status \"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297\": rpc error: code = NotFound desc = could not find container \"78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297\": container with ID starting with 78dc9a1b6712446970724b9eff01b0ce34b3278eaad17f0dd07e8017c2399297 not found: ID does not exist" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.909574 4835 scope.go:117] "RemoveContainer" containerID="4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b" Feb 01 07:25:39 crc kubenswrapper[4835]: E0201 07:25:39.910455 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b\": container with ID starting with 4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b not found: ID does not exist" containerID="4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.910534 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b"} err="failed to get container status \"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b\": rpc error: code = NotFound desc = could not find container \"4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b\": container with ID starting with 4681fe702415970b6f8861404d16e411c78d24a0c2a4df5cc56dd2a62ba6c02b not found: ID does not exist" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.910586 4835 scope.go:117] "RemoveContainer" containerID="a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f" Feb 01 07:25:39 crc kubenswrapper[4835]: E0201 07:25:39.911079 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f\": container with ID starting with a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f not found: ID does not exist" containerID="a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f" Feb 01 07:25:39 crc kubenswrapper[4835]: I0201 07:25:39.911113 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f"} err="failed to get container status \"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f\": rpc error: code = NotFound desc = could not find container \"a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f\": container with ID starting with a6b8f48d9df6c1d8f0734a3ca0cfbfd4aeefeefe31ab96acc4f52f2976e7751f not found: ID does not exist" Feb 01 07:25:41 crc kubenswrapper[4835]: I0201 07:25:41.058812 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:25:41 crc kubenswrapper[4835]: I0201 07:25:41.577770 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" path="/var/lib/kubelet/pods/e3a136e2-3caa-4ed0-960a-6b6a0fdef39e/volumes" Feb 01 07:25:41 crc kubenswrapper[4835]: I0201 07:25:41.837399 4835 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-tlf77" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="registry-server" containerID="cri-o://88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" gracePeriod=2 Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.588094 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerName="oauth-openshift" containerID="cri-o://3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4" gracePeriod=15 Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.772126 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846332 4835 generic.go:334] "Generic (PLEG): container finished" podID="9b287031-510c-410c-ade6-c2cf7a48e363" containerID="88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" exitCode=0 Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846437 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tlf77" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846443 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerDied","Data":"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33"} Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846558 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tlf77" event={"ID":"9b287031-510c-410c-ade6-c2cf7a48e363","Type":"ContainerDied","Data":"50bb18dda4afd99c54bbc442fbcd2bb9c50ee2eb6dac4877186bb6aa56a4b49b"} Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.846594 4835 scope.go:117] "RemoveContainer" containerID="88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.850017 4835 generic.go:334] "Generic (PLEG): container finished" podID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerID="3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4" exitCode=0 Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.850111 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" event={"ID":"62724c3f-5c92-4e77-ba3a-0f6b7215f48a","Type":"ContainerDied","Data":"3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4"} Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.873985 4835 scope.go:117] "RemoveContainer" containerID="5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.893776 4835 scope.go:117] "RemoveContainer" containerID="3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.904158 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") pod \"9b287031-510c-410c-ade6-c2cf7a48e363\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.904499 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") pod \"9b287031-510c-410c-ade6-c2cf7a48e363\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.904536 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") pod \"9b287031-510c-410c-ade6-c2cf7a48e363\" (UID: \"9b287031-510c-410c-ade6-c2cf7a48e363\") " Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.905803 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities" (OuterVolumeSpecName: "utilities") pod "9b287031-510c-410c-ade6-c2cf7a48e363" (UID: "9b287031-510c-410c-ade6-c2cf7a48e363"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.914210 4835 scope.go:117] "RemoveContainer" containerID="88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" Feb 01 07:25:42 crc kubenswrapper[4835]: E0201 07:25:42.915085 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33\": container with ID starting with 88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33 not found: ID does not exist" containerID="88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.915151 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33"} err="failed to get container status \"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33\": rpc error: code = NotFound desc = could not find container \"88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33\": container with ID starting with 88dbcbce0ef58fe86692727ce37088f800bb38d21d5cc849ba2028f877e33b33 not found: ID does not exist" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.915254 4835 scope.go:117] "RemoveContainer" containerID="5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd" Feb 01 07:25:42 crc kubenswrapper[4835]: E0201 07:25:42.915802 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd\": container with ID starting with 5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd not found: ID does not exist" containerID="5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.915846 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd"} err="failed to get container status \"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd\": rpc error: code = NotFound desc = could not find container \"5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd\": container with ID starting with 5c5372c0af7c9bf826f121a7fb0023e19998440e44d914c7ef5d02b3764dbbbd not found: ID does not exist" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.915875 4835 scope.go:117] 
"RemoveContainer" containerID="3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f" Feb 01 07:25:42 crc kubenswrapper[4835]: E0201 07:25:42.916143 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f\": container with ID starting with 3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f not found: ID does not exist" containerID="3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.916181 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f"} err="failed to get container status \"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f\": rpc error: code = NotFound desc = could not find container \"3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f\": container with ID starting with 3e7152183a0a34ef6c3548c8ea64fd3446214efac3b2ff0829cdbc79609fea6f not found: ID does not exist" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.924359 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8" (OuterVolumeSpecName: "kube-api-access-blpp8") pod "9b287031-510c-410c-ade6-c2cf7a48e363" (UID: "9b287031-510c-410c-ade6-c2cf7a48e363"). InnerVolumeSpecName "kube-api-access-blpp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:42 crc kubenswrapper[4835]: I0201 07:25:42.944332 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b287031-510c-410c-ade6-c2cf7a48e363" (UID: "9b287031-510c-410c-ade6-c2cf7a48e363"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.006665 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.006711 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blpp8\" (UniqueName: \"kubernetes.io/projected/9b287031-510c-410c-ade6-c2cf7a48e363-kube-api-access-blpp8\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.006727 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b287031-510c-410c-ade6-c2cf7a48e363-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.111507 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.189389 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.193471 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tlf77"] Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.211863 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.211948 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.211987 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.212134 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.212185 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.212860 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.212986 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213030 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213174 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213218 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213265 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213312 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213352 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 
07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.213386 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") pod \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\" (UID: \"62724c3f-5c92-4e77-ba3a-0f6b7215f48a\") " Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.214022 4835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.214338 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.215258 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.217117 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.218880 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.219086 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx" (OuterVolumeSpecName: "kube-api-access-nptzx") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "kube-api-access-nptzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.219886 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.220555 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.222026 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.222780 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.223024 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.223182 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.223535 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.224061 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "62724c3f-5c92-4e77-ba3a-0f6b7215f48a" (UID: "62724c3f-5c92-4e77-ba3a-0f6b7215f48a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315438 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315512 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nptzx\" (UniqueName: \"kubernetes.io/projected/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-kube-api-access-nptzx\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315541 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315567 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315595 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315619 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315740 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315766 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315791 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315816 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315839 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315863 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.315892 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/62724c3f-5c92-4e77-ba3a-0f6b7215f48a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.580220 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" path="/var/lib/kubelet/pods/9b287031-510c-410c-ade6-c2cf7a48e363/volumes" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.863605 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" event={"ID":"62724c3f-5c92-4e77-ba3a-0f6b7215f48a","Type":"ContainerDied","Data":"b228e669bd5b200a2abbd929c9ec6fc4843ea07663488a746bc7f94dc855f949"} Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.863682 4835 scope.go:117] "RemoveContainer" containerID="3ce1b71be758dd076de182606cb238305ec470a936ab71da41c867e65c4d55e4" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.863725 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tkff4" Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.893295 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:25:43 crc kubenswrapper[4835]: I0201 07:25:43.901139 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tkff4"] Feb 01 07:25:45 crc kubenswrapper[4835]: I0201 07:25:45.577307 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" path="/var/lib/kubelet/pods/62724c3f-5c92-4e77-ba3a-0f6b7215f48a/volumes" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.011921 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-m8scc"] Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012362 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012376 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012389 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012397 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012424 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012433 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012446 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" 
containerName="oauth-openshift" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012453 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerName="oauth-openshift" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012462 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012469 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012482 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012491 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012504 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012511 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012522 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012530 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012541 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012548 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012560 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012567 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012577 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012584 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="extract-content" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012595 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012602 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: E0201 07:25:50.012612 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012620 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="extract-utilities" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012724 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a136e2-3caa-4ed0-960a-6b6a0fdef39e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012739 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b287031-510c-410c-ade6-c2cf7a48e363" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012752 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc8c2486-a383-48cb-aefe-1610bc1c534f" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012762 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="62724c3f-5c92-4e77-ba3a-0f6b7215f48a" containerName="oauth-openshift" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.012773 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f562492e-dbf9-440e-978a-603956fc464e" containerName="registry-server" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.013158 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.014758 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.017383 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.017968 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018077 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018198 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018465 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018516 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018560 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018584 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018589 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018740 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.018773 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.029961 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.030953 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.032588 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-m8scc"] Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.035820 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099350 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099423 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099466 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099500 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-policies\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099529 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-dir\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099623 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099669 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brzb\" (UniqueName: \"kubernetes.io/projected/ed3a6b84-13c0-4752-8860-7c21ade20300-kube-api-access-4brzb\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099735 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099768 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099811 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099847 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099874 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099934 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.099992 
4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.201344 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.201682 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.201812 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.201924 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202024 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202141 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202262 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 
07:25:50.202364 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202510 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-policies\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202609 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202710 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-dir\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.202920 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203017 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brzb\" (UniqueName: \"kubernetes.io/projected/ed3a6b84-13c0-4752-8860-7c21ade20300-kube-api-access-4brzb\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203120 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203917 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-service-ca\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203937 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-policies\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.203031 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ed3a6b84-13c0-4752-8860-7c21ade20300-audit-dir\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.205132 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-cliconfig\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.205269 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.207894 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.208487 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-router-certs\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.210188 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.210647 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-error\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.210660 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.211042 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-user-template-login\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.211229 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-serving-cert\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.229853 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ed3a6b84-13c0-4752-8860-7c21ade20300-v4-0-config-system-session\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.232582 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brzb\" (UniqueName: \"kubernetes.io/projected/ed3a6b84-13c0-4752-8860-7c21ade20300-kube-api-access-4brzb\") pod \"oauth-openshift-666545c866-m8scc\" (UID: \"ed3a6b84-13c0-4752-8860-7c21ade20300\") " pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.330398 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.767250 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-666545c866-m8scc"] Feb 01 07:25:50 crc kubenswrapper[4835]: I0201 07:25:50.907487 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" event={"ID":"ed3a6b84-13c0-4752-8860-7c21ade20300","Type":"ContainerStarted","Data":"420e8a96bdff4bd6618da9f051787277099fe11bdc768c8b035a266c0d084484"} Feb 01 07:25:51 crc kubenswrapper[4835]: I0201 07:25:51.914018 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" event={"ID":"ed3a6b84-13c0-4752-8860-7c21ade20300","Type":"ContainerStarted","Data":"ed77361171f748228589a117c7bf43f180d816bc8a8e9aa1299d4ca4762f6ca9"} Feb 01 07:25:51 crc kubenswrapper[4835]: I0201 07:25:51.914429 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:51 crc kubenswrapper[4835]: I0201 07:25:51.921355 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" Feb 01 07:25:51 crc kubenswrapper[4835]: I0201 07:25:51.942595 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-666545c866-m8scc" podStartSLOduration=34.942571979 podStartE2EDuration="34.942571979s" podCreationTimestamp="2026-02-01 07:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:25:51.939694686 +0000 UTC m=+225.060131160" watchObservedRunningTime="2026-02-01 07:25:51.942571979 +0000 UTC m=+225.063008453" Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.191699 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.192179 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.192277 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.193632 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.193814 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" 
podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5" gracePeriod=600 Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.946009 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5" exitCode=0 Feb 01 07:25:55 crc kubenswrapper[4835]: I0201 07:25:55.946497 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5"} Feb 01 07:25:56 crc kubenswrapper[4835]: I0201 07:25:56.952969 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf"} Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.601735 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603577 4835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603669 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603683 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.603904 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603934 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604019 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603964 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604155 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604162 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604076 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604174 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604181 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604191 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604197 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604205 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604211 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604221 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604227 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 01 07:26:01 crc kubenswrapper[4835]: E0201 07:26:01.604236 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604241 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604222 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.603981 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94" gracePeriod=15 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604601 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604614 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604625 
4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604631 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604638 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.604646 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.616644 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681465 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681537 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681578 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681612 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 
crc kubenswrapper[4835]: I0201 07:26:01.681675 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.681704 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782469 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782813 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782856 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782881 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782900 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782931 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782971 
4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782992 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.783005 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782948 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782937 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.782683 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.783036 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.783102 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.783204 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.986600 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.988185 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989072 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9" exitCode=0 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989114 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4" exitCode=0 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989132 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94" exitCode=0 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989146 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54" exitCode=2 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.989251 4835 scope.go:117] "RemoveContainer" containerID="39bf8eb611f6b4328ab4f1c1e77f6cdf4573113ace1b1e04aaf429f3e87dac88" Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.991112 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" containerID="3a62854f07efe9ee61bbc8b6cf4f08d0ff0e9a200d15a47492c6bdf618532148" exitCode=0 Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.991148 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9b454b8-f758-43d4-bd2b-93ebc807e06e","Type":"ContainerDied","Data":"3a62854f07efe9ee61bbc8b6cf4f08d0ff0e9a200d15a47492c6bdf618532148"} Feb 01 07:26:01 crc kubenswrapper[4835]: I0201 07:26:01.992000 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.002465 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.348810 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.350032 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406461 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") pod \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406656 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") pod \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406752 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") pod \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\" (UID: \"c9b454b8-f758-43d4-bd2b-93ebc807e06e\") " Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406784 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c9b454b8-f758-43d4-bd2b-93ebc807e06e" (UID: "c9b454b8-f758-43d4-bd2b-93ebc807e06e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.406907 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock" (OuterVolumeSpecName: "var-lock") pod "c9b454b8-f758-43d4-bd2b-93ebc807e06e" (UID: "c9b454b8-f758-43d4-bd2b-93ebc807e06e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.407163 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.407198 4835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9b454b8-f758-43d4-bd2b-93ebc807e06e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.412503 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c9b454b8-f758-43d4-bd2b-93ebc807e06e" (UID: "c9b454b8-f758-43d4-bd2b-93ebc807e06e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.508075 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9b454b8-f758-43d4-bd2b-93ebc807e06e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.967551 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.969486 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.970334 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:03 crc kubenswrapper[4835]: I0201 07:26:03.970694 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.011307 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.013339 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2" exitCode=0 Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.013432 4835 scope.go:117] "RemoveContainer" containerID="7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.013750 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.016660 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c9b454b8-f758-43d4-bd2b-93ebc807e06e","Type":"ContainerDied","Data":"2997edb8ab02ca7e2da0f4120bdf3140a5e44974f1c0d9270cf560bcceec34c4"} Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.016710 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.016712 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2997edb8ab02ca7e2da0f4120bdf3140a5e44974f1c0d9270cf560bcceec34c4" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.024682 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.025338 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.032464 4835 scope.go:117] "RemoveContainer" containerID="0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.057433 4835 scope.go:117] "RemoveContainer" containerID="02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.076217 4835 scope.go:117] "RemoveContainer" containerID="6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.090153 4835 scope.go:117] "RemoveContainer" containerID="7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.107846 4835 scope.go:117] "RemoveContainer" containerID="fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119568 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119602 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119622 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119783 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119818 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.119879 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.127882 4835 scope.go:117] "RemoveContainer" containerID="7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9" Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.128399 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\": container with ID starting with 7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9 not found: ID does not exist" containerID="7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.128458 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9"} err="failed to get container status \"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\": rpc error: code = NotFound desc = could not find container \"7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9\": container with ID starting with 7b18ab2c73314e22dcc5100b6d0e9934ac246f65852910b2409efb79fe0562b9 not found: ID does not exist" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.128490 4835 scope.go:117] "RemoveContainer" containerID="0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4" Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.128953 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\": container with ID starting with 0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4 not found: ID does not exist" containerID="0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129021 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4"} err="failed to get container status \"0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\": rpc error: code = NotFound desc = could not find container \"0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4\": container with ID starting with 0633bc494f0d2b54c1e0e750ea15e93948247459eeba0e97911b614a2c69aaf4 not found: ID does not exist" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129066 4835 scope.go:117] "RemoveContainer" containerID="02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94" Feb 01 07:26:04 
crc kubenswrapper[4835]: E0201 07:26:04.129475 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\": container with ID starting with 02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94 not found: ID does not exist" containerID="02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129537 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94"} err="failed to get container status \"02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\": rpc error: code = NotFound desc = could not find container \"02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94\": container with ID starting with 02c711adecccab148cf30aeb289a57e4f5a3348634c31d66bd17ab0519015b94 not found: ID does not exist" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129560 4835 scope.go:117] "RemoveContainer" containerID="6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54" Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.129898 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\": container with ID starting with 6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54 not found: ID does not exist" containerID="6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129929 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54"} err="failed to get container status \"6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\": rpc error: code = NotFound desc = could not find container \"6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54\": container with ID starting with 6de860008036220502edd9adaa4e636db4c95d9bdf66b3be79e35d81776ecd54 not found: ID does not exist" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.129950 4835 scope.go:117] "RemoveContainer" containerID="7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2" Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.131582 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\": container with ID starting with 7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2 not found: ID does not exist" containerID="7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.131626 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2"} err="failed to get container status \"7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\": rpc error: code = NotFound desc = could not find container \"7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2\": container with ID starting with 7edada978902eec037d314bc4407546d79213da9c322bcbba54eb41aa3057bc2 not found: ID does not exist" Feb 01 07:26:04 crc 
kubenswrapper[4835]: I0201 07:26:04.131653 4835 scope.go:117] "RemoveContainer" containerID="fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17" Feb 01 07:26:04 crc kubenswrapper[4835]: E0201 07:26:04.131945 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\": container with ID starting with fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17 not found: ID does not exist" containerID="fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.131979 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17"} err="failed to get container status \"fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\": rpc error: code = NotFound desc = could not find container \"fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17\": container with ID starting with fda63be7161fbbdf5e7d71f3643525a52125b3c9783d4b9f9fff7e687775cf17 not found: ID does not exist" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.221513 4835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.221562 4835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.221582 4835 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.345803 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:04 crc kubenswrapper[4835]: I0201 07:26:04.347383 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:05 crc kubenswrapper[4835]: I0201 07:26:05.576456 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 01 07:26:06 crc kubenswrapper[4835]: E0201 07:26:06.630258 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:06 crc kubenswrapper[4835]: I0201 07:26:06.630839 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:06 crc kubenswrapper[4835]: W0201 07:26:06.664134 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e3f00328f377ed3182987ee1fc3cb2b673847b03106247434b8862766fc0c12e WatchSource:0}: Error finding container e3f00328f377ed3182987ee1fc3cb2b673847b03106247434b8862766fc0c12e: Status 404 returned error can't find the container with id e3f00328f377ed3182987ee1fc3cb2b673847b03106247434b8862766fc0c12e Feb 01 07:26:06 crc kubenswrapper[4835]: E0201 07:26:06.668900 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18900ea7a0495361 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 07:26:06.668092257 +0000 UTC m=+239.788528701,LastTimestamp:2026-02-01 07:26:06.668092257 +0000 UTC m=+239.788528701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 01 07:26:07 crc kubenswrapper[4835]: I0201 07:26:07.043164 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e3f00328f377ed3182987ee1fc3cb2b673847b03106247434b8862766fc0c12e"} Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.275963 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.276734 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.277252 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.277643 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.277981 4835 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:07 crc kubenswrapper[4835]: I0201 07:26:07.278019 4835 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.278289 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms" Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.479635 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Feb 01 07:26:07 crc kubenswrapper[4835]: I0201 07:26:07.571202 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:07 crc kubenswrapper[4835]: E0201 07:26:07.880390 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Feb 01 07:26:08 crc kubenswrapper[4835]: I0201 07:26:08.049707 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17"} Feb 01 07:26:08 crc kubenswrapper[4835]: E0201 07:26:08.050367 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:08 crc kubenswrapper[4835]: I0201 07:26:08.050450 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:08 crc kubenswrapper[4835]: E0201 07:26:08.682662 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Feb 01 07:26:09 crc kubenswrapper[4835]: E0201 07:26:09.056048 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:10 crc kubenswrapper[4835]: E0201 07:26:10.284315 4835 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="3.2s" Feb 01 07:26:13 crc kubenswrapper[4835]: E0201 07:26:13.485296 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="6.4s" Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.115317 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.115901 4835 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976" exitCode=1 Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.115973 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976"} Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.116850 4835 scope.go:117] "RemoveContainer" containerID="611b9e3b2a2b3b34398db7e1a341a74ed5155a600fe67a2e937244ef47c46976" Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.117312 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:15 crc kubenswrapper[4835]: I0201 07:26:15.119207 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:15 crc kubenswrapper[4835]: E0201 07:26:15.355964 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18900ea7a0495361 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-01 07:26:06.668092257 +0000 UTC m=+239.788528701,LastTimestamp:2026-02-01 07:26:06.668092257 +0000 UTC m=+239.788528701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.129964 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.130029 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5364c9f6974aec68b47b5c8588927fc1bdaf21f74470ab2cc89a5b9b958550d1"} Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.131272 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.135902 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.566160 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.567235 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.567806 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.595567 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7" Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.595612 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7" Feb 01 07:26:16 crc kubenswrapper[4835]: E0201 07:26:16.596114 4835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:16 crc kubenswrapper[4835]: I0201 07:26:16.596796 4835 util.go:30] "No sandbox for pod can be found. 
Feb 01 07:26:16 crc kubenswrapper[4835]: W0201 07:26:16.630487 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-424b95721b8ff83eb1d7a89e372612a506e3ed06544a4575ea99d675e8375e10 WatchSource:0}: Error finding container 424b95721b8ff83eb1d7a89e372612a506e3ed06544a4575ea99d675e8375e10: Status 404 returned error can't find the container with id 424b95721b8ff83eb1d7a89e372612a506e3ed06544a4575ea99d675e8375e10
Feb 01 07:26:17 crc kubenswrapper[4835]: I0201 07:26:17.140723 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"424b95721b8ff83eb1d7a89e372612a506e3ed06544a4575ea99d675e8375e10"}
Feb 01 07:26:17 crc kubenswrapper[4835]: I0201 07:26:17.576679 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:17 crc kubenswrapper[4835]: I0201 07:26:17.577564 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:17 crc kubenswrapper[4835]: I0201 07:26:17.578225 4835 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:17 crc kubenswrapper[4835]: E0201 07:26:17.626937 4835 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" volumeName="registry-storage"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.149500 4835 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8f5ee58ffbe76f6e65ebe195b8d37780bccd06ec6bb269f7ccb020979e4a5319" exitCode=0
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.149562 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8f5ee58ffbe76f6e65ebe195b8d37780bccd06ec6bb269f7ccb020979e4a5319"}
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.149973 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.150005 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
Feb 01 07:26:18 crc kubenswrapper[4835]: E0201 07:26:18.150665 4835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.150705 4835 status_manager.go:851] "Failed to get status for pod" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.152926 4835 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:18 crc kubenswrapper[4835]: I0201 07:26:18.153246 4835 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused"
Feb 01 07:26:19 crc kubenswrapper[4835]: I0201 07:26:19.157776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"41018aa6d57af6044d17ccc0cd9b1534c7b683e85dd94c4545cf962c6e45ce32"}
Feb 01 07:26:19 crc kubenswrapper[4835]: I0201 07:26:19.157828 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3fbe5aa7ad6849137c02b3975e4460c2e6b49cf85f8c2c9aaf5fbaddc98d6847"}
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166190 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"531fdfc73b147d204745e7871dce2d6953f2d35502e7aab35e3c8a76e339f3b0"}
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166441 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166451 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"56450161e526e8ef8619f7267049ecf198eb6f7b0e0ba0ae4630b39bf76fc521"}
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166462 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b5e77a08d11bf0fafb64750d1d51714e22d6d88f0f2d58561f93feddfed02d5"}
Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166496 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7"
podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7" Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.166513 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7" Feb 01 07:26:20 crc kubenswrapper[4835]: I0201 07:26:20.653471 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:26:21 crc kubenswrapper[4835]: I0201 07:26:21.597954 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:21 crc kubenswrapper[4835]: I0201 07:26:21.598665 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:21 crc kubenswrapper[4835]: I0201 07:26:21.612009 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:23 crc kubenswrapper[4835]: I0201 07:26:23.376748 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:26:23 crc kubenswrapper[4835]: I0201 07:26:23.386074 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:26:25 crc kubenswrapper[4835]: I0201 07:26:25.200910 4835 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:25 crc kubenswrapper[4835]: I0201 07:26:25.240185 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"87ff5368-06f9-4f47-b5bb-e5916283dec7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:26:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:26:18Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:26:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-01T07:26:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3fbe5aa7ad6849137c02b3975e4460c2e6b49cf85f8c2c9aaf5fbaddc98d6847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b5e77a08d11bf0fafb64750d1d51714e22d6d88f0f2d58561f93feddfed02d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41018aa6d57af6044d17ccc0cd9b1534c7b683e85dd94c4545cf962c6e45ce32\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://531fdfc73b147d204745e7871dce2d6953f2d35502e7aab35e3c8a76e339f3b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56450161e
526e8ef8619f7267049ecf198eb6f7b0e0ba0ae4630b39bf76fc521\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-01T07:26:19Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f5ee58ffbe76f6e65ebe195b8d37780bccd06ec6bb269f7ccb020979e4a5319\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f5ee58ffbe76f6e65ebe195b8d37780bccd06ec6bb269f7ccb020979e4a5319\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-01T07:26:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-01T07:26:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}]}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod \"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"87ff5368-06f9-4f47-b5bb-e5916283dec7\": field is immutable" Feb 01 07:26:25 crc kubenswrapper[4835]: I0201 07:26:25.286009 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="695946f9-9c64-47c4-aada-771c48dbcef9" Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.213203 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7" Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.213525 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7" Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.219481 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="695946f9-9c64-47c4-aada-771c48dbcef9" Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.219757 4835 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://3fbe5aa7ad6849137c02b3975e4460c2e6b49cf85f8c2c9aaf5fbaddc98d6847" Feb 01 07:26:26 crc kubenswrapper[4835]: I0201 07:26:26.219789 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:27 crc kubenswrapper[4835]: I0201 07:26:27.218974 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7" Feb 01 07:26:27 crc kubenswrapper[4835]: I0201 07:26:27.219015 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="87ff5368-06f9-4f47-b5bb-e5916283dec7" Feb 01 07:26:27 crc kubenswrapper[4835]: I0201 07:26:27.223245 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="695946f9-9c64-47c4-aada-771c48dbcef9" Feb 01 07:26:30 crc kubenswrapper[4835]: I0201 07:26:30.658202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.568986 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.734813 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.860578 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.897009 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 01 07:26:35 crc kubenswrapper[4835]: I0201 07:26:35.972318 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 01 07:26:36 crc kubenswrapper[4835]: I0201 07:26:36.075677 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 01 07:26:36 crc kubenswrapper[4835]: I0201 07:26:36.166058 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 01 07:26:36 crc kubenswrapper[4835]: I0201 07:26:36.976012 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.055504 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.143271 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.304504 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.412822 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.423944 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.447820 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.529751 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.582848 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.768353 
4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.819121 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 01 07:26:37 crc kubenswrapper[4835]: I0201 07:26:37.945484 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.103993 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.125595 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.261331 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.264347 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.267272 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.283643 4835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.291604 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.291711 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.300640 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.316348 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.316331998 podStartE2EDuration="13.316331998s" podCreationTimestamp="2026-02-01 07:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:26:38.312990788 +0000 UTC m=+271.433427262" watchObservedRunningTime="2026-02-01 07:26:38.316331998 +0000 UTC m=+271.436768432" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.402470 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.621957 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.677657 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.809878 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 01 07:26:38 crc kubenswrapper[4835]: I0201 07:26:38.869059 4835 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.062702 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.069929 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.084475 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.199639 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.211407 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.258941 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.263858 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.272712 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.273694 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.286368 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.288193 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.320280 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.335464 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.372359 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.419231 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.451524 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.464274 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.484315 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.559336 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.594637 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.601170 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.636512 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.702731 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.751793 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.753052 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.798342 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.803492 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.926747 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.933100 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.952937 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.965374 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.966838 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 01 07:26:39 crc kubenswrapper[4835]: I0201 07:26:39.979402 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.047917 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.137838 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.329563 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.331233 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.526041 4835 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.545989 4835 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.567820 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.756835 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.788502 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.791214 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.867559 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.907034 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 01 07:26:40 crc kubenswrapper[4835]: I0201 07:26:40.988698 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.051239 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.052949 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.055460 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.171922 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.175151 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.282151 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.302771 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.309690 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.360128 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.460091 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 01 07:26:41 crc 
kubenswrapper[4835]: I0201 07:26:41.628942 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.671367 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.714340 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.806008 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.839821 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.862140 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.878142 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.974726 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 01 07:26:41 crc kubenswrapper[4835]: I0201 07:26:41.985469 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.048844 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.091839 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.170531 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.183942 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.185600 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.200257 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.213929 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.331375 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.350525 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.480274 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.599348 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.616221 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.632279 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.641138 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.651749 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.684027 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.803022 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.870503 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.936687 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 01 07:26:42 crc kubenswrapper[4835]: I0201 07:26:42.951144 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.024831 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.032139 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.096684 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.262544 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.270029 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.289231 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.319085 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.359372 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.401342 
4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.453795 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.453954 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.470941 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.471062 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.497523 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.515372 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.668624 4835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.697630 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.786987 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.787332 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.808590 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.829596 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.917613 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 01 07:26:43 crc kubenswrapper[4835]: I0201 07:26:43.940100 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.013265 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.041572 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.171125 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.182460 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.187288 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.215488 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.232880 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.261498 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.366485 4835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.388680 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.392221 4835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.451862 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.487719 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.492842 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.598387 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.618468 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.705607 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.819173 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.823426 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.827899 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 01 07:26:44 crc kubenswrapper[4835]: I0201 07:26:44.847584 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.002494 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.027936 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.094543 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.111439 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.177031 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.188296 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.302070 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.302743 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.423904 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.439175 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.503393 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.515207 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.626057 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 01 07:26:45 crc kubenswrapper[4835]: I0201 07:26:45.963962 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.034966 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.067219 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.074587 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.091683 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.094092 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.139744 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.227215 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" 
Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.293080 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.303531 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.327922 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.396129 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.433599 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.482988 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.522426 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.600966 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.678559 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.722908 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.725887 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.769929 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.811218 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.847425 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.870145 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.945392 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 01 07:26:46 crc kubenswrapper[4835]: I0201 07:26:46.945434 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.124734 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.130139 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.212460 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.319460 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.342660 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.434024 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.445617 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.485685 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.636906 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.671044 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.752857 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.769240 4835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.769539 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17" gracePeriod=5 Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.797577 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.824054 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 01 07:26:47 crc kubenswrapper[4835]: I0201 07:26:47.879093 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.200684 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.231220 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.594316 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.822177 4835 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 01 07:26:48 crc kubenswrapper[4835]: I0201 07:26:48.894709 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.026998 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.043071 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.056283 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.133097 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.133797 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.147804 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.232633 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.442617 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.470322 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.494049 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.508745 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.738356 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.740574 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 01 07:26:49 crc kubenswrapper[4835]: I0201 07:26:49.969745 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.378684 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.424836 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.555170 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.641650 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.703861 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 01 07:26:50 crc kubenswrapper[4835]: I0201 07:26:50.895787 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 01 07:26:51 crc kubenswrapper[4835]: I0201 07:26:51.022319 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 01 07:26:51 crc kubenswrapper[4835]: I0201 07:26:51.048451 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 01 07:26:51 crc kubenswrapper[4835]: I0201 07:26:51.317965 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 01 07:26:51 crc kubenswrapper[4835]: I0201 07:26:51.816696 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.772880 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"] Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.776136 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zbfbl" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="registry-server" containerID="cri-o://0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68" gracePeriod=30 Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.788517 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t677t"] Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.789493 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t677t" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="registry-server" containerID="cri-o://e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601" gracePeriod=30 Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.801776 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.802244 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator" containerID="cri-o://aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f" gracePeriod=30 Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.829703 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"] Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.830171 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xx49" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="registry-server" containerID="cri-o://9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a" gracePeriod=30 Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.844291 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:26:52 crc kubenswrapper[4835]: I0201 07:26:52.851164 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7hk7" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server" containerID="cri-o://9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a" gracePeriod=30 Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.369625 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.369734 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.407551 4835 generic.go:334] "Generic (PLEG): container finished" podID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerID="0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68" exitCode=0 Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.407657 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerDied","Data":"0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68"} Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.411608 4835 generic.go:334] "Generic (PLEG): container finished" podID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerID="9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a" exitCode=0 Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.411739 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerDied","Data":"9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a"} Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.415229 4835 generic.go:334] "Generic (PLEG): container finished" podID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerID="e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601" exitCode=0 Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.415494 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerDied","Data":"e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601"} Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.419005 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.419094 4835 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17" exitCode=137 Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.419257 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.419506 4835 scope.go:117] "RemoveContainer" containerID="4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.421721 4835 generic.go:334] "Generic (PLEG): container finished" podID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerID="aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f" exitCode=0 Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.421816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" event={"ID":"8615180e-fc31-41b2-ad59-5ae2e48af5a2","Type":"ContainerDied","Data":"aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f"} Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.425545 4835 generic.go:334] "Generic (PLEG): container finished" podID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerID="9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a" exitCode=0 Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.425589 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerDied","Data":"9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a"} Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.450012 4835 scope.go:117] "RemoveContainer" containerID="4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17" Feb 01 07:26:53 crc kubenswrapper[4835]: E0201 07:26:53.450827 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17\": container with ID starting with 4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17 not found: ID does not exist" containerID="4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.450894 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17"} err="failed to get container status \"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17\": rpc error: code = NotFound desc = could not find container \"4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17\": container with ID starting with 4c8e2b44520104ec8ca2ec72d244a8a67a0f39aa65f3b9ab96fedb0af4e6ca17 not found: ID does not exist" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510496 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510573 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510688 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510716 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.510745 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.511223 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.511297 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.511324 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.512328 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.519699 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.575382 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.611944 4835 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.611975 4835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.611985 4835 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.611993 4835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.612002 4835 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.724677 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.778779 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.791695 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.798420 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.849220 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918111 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") pod \"602186bd-e71a-4ce1-ad39-c56495e815c3\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918163 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") pod \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918186 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") pod \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918210 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") pod \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918234 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") pod \"835b2622-9047-4e3a-b019-6f15c5fd4566\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918253 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") pod \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\" (UID: \"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918269 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") pod \"602186bd-e71a-4ce1-ad39-c56495e815c3\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918300 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") pod \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918322 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") pod \"602186bd-e71a-4ce1-ad39-c56495e815c3\" (UID: \"602186bd-e71a-4ce1-ad39-c56495e815c3\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918377 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") 
pod \"835b2622-9047-4e3a-b019-6f15c5fd4566\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918424 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") pod \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\" (UID: \"8615180e-fc31-41b2-ad59-5ae2e48af5a2\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.918451 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") pod \"835b2622-9047-4e3a-b019-6f15c5fd4566\" (UID: \"835b2622-9047-4e3a-b019-6f15c5fd4566\") " Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.919776 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities" (OuterVolumeSpecName: "utilities") pod "835b2622-9047-4e3a-b019-6f15c5fd4566" (UID: "835b2622-9047-4e3a-b019-6f15c5fd4566"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.919930 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8615180e-fc31-41b2-ad59-5ae2e48af5a2" (UID: "8615180e-fc31-41b2-ad59-5ae2e48af5a2"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.920510 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities" (OuterVolumeSpecName: "utilities") pod "602186bd-e71a-4ce1-ad39-c56495e815c3" (UID: "602186bd-e71a-4ce1-ad39-c56495e815c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.921424 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities" (OuterVolumeSpecName: "utilities") pod "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" (UID: "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.923032 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5" (OuterVolumeSpecName: "kube-api-access-k72t5") pod "835b2622-9047-4e3a-b019-6f15c5fd4566" (UID: "835b2622-9047-4e3a-b019-6f15c5fd4566"). InnerVolumeSpecName "kube-api-access-k72t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.923663 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8615180e-fc31-41b2-ad59-5ae2e48af5a2" (UID: "8615180e-fc31-41b2-ad59-5ae2e48af5a2"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.925497 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7" (OuterVolumeSpecName: "kube-api-access-jhft7") pod "8615180e-fc31-41b2-ad59-5ae2e48af5a2" (UID: "8615180e-fc31-41b2-ad59-5ae2e48af5a2"). InnerVolumeSpecName "kube-api-access-jhft7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.929961 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn" (OuterVolumeSpecName: "kube-api-access-wh6bn") pod "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" (UID: "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e"). InnerVolumeSpecName "kube-api-access-wh6bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.936572 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr" (OuterVolumeSpecName: "kube-api-access-fcvsr") pod "602186bd-e71a-4ce1-ad39-c56495e815c3" (UID: "602186bd-e71a-4ce1-ad39-c56495e815c3"). InnerVolumeSpecName "kube-api-access-fcvsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.952222 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "602186bd-e71a-4ce1-ad39-c56495e815c3" (UID: "602186bd-e71a-4ce1-ad39-c56495e815c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.981321 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "835b2622-9047-4e3a-b019-6f15c5fd4566" (UID: "835b2622-9047-4e3a-b019-6f15c5fd4566"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:26:53 crc kubenswrapper[4835]: I0201 07:26:53.992718 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" (UID: "7a177b30-3240-49d8-b0c5-b74f8e8f4c7e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019476 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") pod \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019597 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") pod \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019623 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") pod \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\" (UID: \"2e2bb332-ae2b-4ef7-90b2-79928bf7407b\") " Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019835 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019852 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019864 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k72t5\" (UniqueName: \"kubernetes.io/projected/835b2622-9047-4e3a-b019-6f15c5fd4566-kube-api-access-k72t5\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019873 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcvsr\" (UniqueName: \"kubernetes.io/projected/602186bd-e71a-4ce1-ad39-c56495e815c3-kube-api-access-fcvsr\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019883 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh6bn\" (UniqueName: \"kubernetes.io/projected/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-kube-api-access-wh6bn\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019910 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhft7\" (UniqueName: \"kubernetes.io/projected/8615180e-fc31-41b2-ad59-5ae2e48af5a2-kube-api-access-jhft7\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019919 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019927 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/835b2622-9047-4e3a-b019-6f15c5fd4566-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019934 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019942 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019950 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8615180e-fc31-41b2-ad59-5ae2e48af5a2-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.019958 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/602186bd-e71a-4ce1-ad39-c56495e815c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.020708 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities" (OuterVolumeSpecName: "utilities") pod "2e2bb332-ae2b-4ef7-90b2-79928bf7407b" (UID: "2e2bb332-ae2b-4ef7-90b2-79928bf7407b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.024230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9" (OuterVolumeSpecName: "kube-api-access-97wl9") pod "2e2bb332-ae2b-4ef7-90b2-79928bf7407b" (UID: "2e2bb332-ae2b-4ef7-90b2-79928bf7407b"). InnerVolumeSpecName "kube-api-access-97wl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.121459 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.121810 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97wl9\" (UniqueName: \"kubernetes.io/projected/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-kube-api-access-97wl9\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.144250 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e2bb332-ae2b-4ef7-90b2-79928bf7407b" (UID: "2e2bb332-ae2b-4ef7-90b2-79928bf7407b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.223533 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e2bb332-ae2b-4ef7-90b2-79928bf7407b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.435701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbfbl" event={"ID":"7a177b30-3240-49d8-b0c5-b74f8e8f4c7e","Type":"ContainerDied","Data":"34d744c0f2118911ec3770b8a37e279293e3d0075191d345f7ef2f24b56383a6"} Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.435737 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zbfbl" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.435777 4835 scope.go:117] "RemoveContainer" containerID="0eea26ae4bb5a1954f72fbcc75d1e7903480a69577a36065cf6a4254e3efba68" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.440536 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xx49" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.440538 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xx49" event={"ID":"602186bd-e71a-4ce1-ad39-c56495e815c3","Type":"ContainerDied","Data":"dea430e052099dd47c2c324f9a18af947b95755e422272ec8bbff41882bef5e5"} Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.444668 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t677t" event={"ID":"835b2622-9047-4e3a-b019-6f15c5fd4566","Type":"ContainerDied","Data":"8633807aa4c1b4534aedf9236769294f25ed6ac597e2c0fda34cf924f7b62039"} Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.444701 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t677t" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.448649 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.448687 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mjg6g" event={"ID":"8615180e-fc31-41b2-ad59-5ae2e48af5a2","Type":"ContainerDied","Data":"756ac183cdf318bae9818cbd3f3e4f67346c6974661fa7194394a92f9755088e"} Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.452696 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7hk7" event={"ID":"2e2bb332-ae2b-4ef7-90b2-79928bf7407b","Type":"ContainerDied","Data":"46b5cafa1f07b5021e9e78fc5e6be54cf12c37d6cc9f28c581409330362b0959"} Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.452888 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7hk7" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.476699 4835 scope.go:117] "RemoveContainer" containerID="7fde970c7809bb8c50b149f97b8907cd34e5ed3f92e53b3f48046bec959d09ef" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.498614 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.503146 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zbfbl"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.523969 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t677t"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.527171 4835 scope.go:117] "RemoveContainer" containerID="eac60a2bcfc7a27f8cce064694d441e59039265b959d26823af533d85c7dcf10" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.533890 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t677t"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.538290 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.546236 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xx49"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.552020 4835 scope.go:117] "RemoveContainer" containerID="9eb022e2135b0596e33429e62d1e55cd8a0be16a9faa993cffd3947dfd050b0a" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.554883 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.559399 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7hk7"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.563085 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.566318 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mjg6g"] Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.575112 4835 scope.go:117] "RemoveContainer" containerID="b1f0e4a7c799308902bb8e0217a0c30fdd02e1a32fd2564302d2a528cea8ba75" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.592897 4835 scope.go:117] "RemoveContainer" containerID="b14cf051de6ab1294efac8b8b8e42b820cf594040b129fc04b183d93a8efbf57" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.607178 4835 scope.go:117] "RemoveContainer" containerID="e8d75b9cdb3185ff37877ed85d6d3372730274f7dbff223d7ea5c84fe296a601" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.632277 4835 scope.go:117] "RemoveContainer" containerID="1b7f8d984d304fa16176f9ff67b5f5c30b1244ad6e8dd4e1ef20f9098a0f7fe2" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.653208 4835 scope.go:117] "RemoveContainer" containerID="7270b81f0145b4123ee2f475f3f90b8aa11e59eef5e948db9ab2c46452e1838a" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.671854 4835 scope.go:117] "RemoveContainer" containerID="aec701259e552f23dfcf4e9cf051bfbdb52a72d9c0db034b350a2330451e632f" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.689648 4835 scope.go:117] "RemoveContainer" 
containerID="9cd63e168f5ee1bba32762ea60b5535c14b22b6a31b98e3419ead8dd99d4331a" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.704709 4835 scope.go:117] "RemoveContainer" containerID="deccbf5bf47273db8305d287368e84a9555304937b617c52aaad45a3c56162a2" Feb 01 07:26:54 crc kubenswrapper[4835]: I0201 07:26:54.726834 4835 scope.go:117] "RemoveContainer" containerID="d5974ea84742510757e055f310d0049c446f1e2fe023968cfe1b5034d72af99c" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.578031 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" path="/var/lib/kubelet/pods/2e2bb332-ae2b-4ef7-90b2-79928bf7407b/volumes" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.580059 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" path="/var/lib/kubelet/pods/602186bd-e71a-4ce1-ad39-c56495e815c3/volumes" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.581915 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" path="/var/lib/kubelet/pods/7a177b30-3240-49d8-b0c5-b74f8e8f4c7e/volumes" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.584688 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" path="/var/lib/kubelet/pods/835b2622-9047-4e3a-b019-6f15c5fd4566/volumes" Feb 01 07:26:55 crc kubenswrapper[4835]: I0201 07:26:55.586355 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" path="/var/lib/kubelet/pods/8615180e-fc31-41b2-ad59-5ae2e48af5a2/volumes" Feb 01 07:27:05 crc kubenswrapper[4835]: I0201 07:27:05.325492 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 01 07:27:06 crc kubenswrapper[4835]: I0201 07:27:06.620542 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 01 07:27:07 crc kubenswrapper[4835]: I0201 07:27:07.334924 4835 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 01 07:27:10 crc kubenswrapper[4835]: I0201 07:27:10.817513 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 01 07:27:13 crc kubenswrapper[4835]: I0201 07:27:13.626029 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.551644 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.551849 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager" containerID="cri-o://46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" gracePeriod=30 Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.671431 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.671721 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" podUID="46f4b60b-0076-4087-b541-4617c3752687" containerName="route-controller-manager" containerID="cri-o://d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" gracePeriod=30 Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.887725 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908226 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908282 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908327 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908355 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.908431 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") pod \"79f19c84-0217-4b08-8b4d-663096ce67b4\" (UID: \"79f19c84-0217-4b08-8b4d-663096ce67b4\") " Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.909346 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.909713 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config" (OuterVolumeSpecName: "config") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.910080 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.917189 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52" (OuterVolumeSpecName: "kube-api-access-ttz52") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "kube-api-access-ttz52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.919304 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "79f19c84-0217-4b08-8b4d-663096ce67b4" (UID: "79f19c84-0217-4b08-8b4d-663096ce67b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:27:15 crc kubenswrapper[4835]: I0201 07:27:15.965881 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009356 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") pod \"46f4b60b-0076-4087-b541-4617c3752687\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009449 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") pod \"46f4b60b-0076-4087-b541-4617c3752687\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009485 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") pod \"46f4b60b-0076-4087-b541-4617c3752687\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009541 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") pod \"46f4b60b-0076-4087-b541-4617c3752687\" (UID: \"46f4b60b-0076-4087-b541-4617c3752687\") " Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009743 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009758 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009770 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttz52\" (UniqueName: \"kubernetes.io/projected/79f19c84-0217-4b08-8b4d-663096ce67b4-kube-api-access-ttz52\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009781 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/79f19c84-0217-4b08-8b4d-663096ce67b4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.009793 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/79f19c84-0217-4b08-8b4d-663096ce67b4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.010687 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config" (OuterVolumeSpecName: "config") pod "46f4b60b-0076-4087-b541-4617c3752687" (UID: "46f4b60b-0076-4087-b541-4617c3752687"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.011343 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca" (OuterVolumeSpecName: "client-ca") pod "46f4b60b-0076-4087-b541-4617c3752687" (UID: "46f4b60b-0076-4087-b541-4617c3752687"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.014759 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "46f4b60b-0076-4087-b541-4617c3752687" (UID: "46f4b60b-0076-4087-b541-4617c3752687"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.015352 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9" (OuterVolumeSpecName: "kube-api-access-qckj9") pod "46f4b60b-0076-4087-b541-4617c3752687" (UID: "46f4b60b-0076-4087-b541-4617c3752687"). InnerVolumeSpecName "kube-api-access-qckj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.111275 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.111310 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46f4b60b-0076-4087-b541-4617c3752687-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.111320 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qckj9\" (UniqueName: \"kubernetes.io/projected/46f4b60b-0076-4087-b541-4617c3752687-kube-api-access-qckj9\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.111330 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f4b60b-0076-4087-b541-4617c3752687-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568803 4835 generic.go:334] "Generic (PLEG): container finished" podID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerID="46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" exitCode=0 Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568873 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" event={"ID":"79f19c84-0217-4b08-8b4d-663096ce67b4","Type":"ContainerDied","Data":"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624"} Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568894 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568915 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hpgql" event={"ID":"79f19c84-0217-4b08-8b4d-663096ce67b4","Type":"ContainerDied","Data":"88a43a32aeb11a7266228e44e96343168e4ad3f4bf296e26425609793a59a308"} Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.568936 4835 scope.go:117] "RemoveContainer" containerID="46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.572740 4835 generic.go:334] "Generic (PLEG): container finished" podID="46f4b60b-0076-4087-b541-4617c3752687" containerID="d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" exitCode=0 Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.572794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" event={"ID":"46f4b60b-0076-4087-b541-4617c3752687","Type":"ContainerDied","Data":"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c"} Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.572857 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" event={"ID":"46f4b60b-0076-4087-b541-4617c3752687","Type":"ContainerDied","Data":"eaca48a7b94d929256f67ed77a297ce26bfbe10f609a2d3253d4e4ba2b33d879"} Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.572802 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.602369 4835 scope.go:117] "RemoveContainer" containerID="46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" Feb 01 07:27:16 crc kubenswrapper[4835]: E0201 07:27:16.602882 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624\": container with ID starting with 46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624 not found: ID does not exist" containerID="46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.602925 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624"} err="failed to get container status \"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624\": rpc error: code = NotFound desc = could not find container \"46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624\": container with ID starting with 46bc09af32b8d9716f53039e3e62c795226e8f9e49a4260bebbca463ed20a624 not found: ID does not exist" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.602949 4835 scope.go:117] "RemoveContainer" containerID="d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.615322 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.621574 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2qjjt"] Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.624461 4835 scope.go:117] "RemoveContainer" containerID="d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" Feb 01 07:27:16 crc kubenswrapper[4835]: E0201 07:27:16.624941 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c\": container with ID starting with d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c not found: ID does not exist" containerID="d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.624976 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c"} err="failed to get container status \"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c\": rpc error: code = NotFound desc = could not find container \"d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c\": container with ID starting with d75057a652ecc6476d8972aeed2313397cacadfb1acde29b6fc5f478793bb81c not found: ID does not exist" Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.626768 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 07:27:16 crc kubenswrapper[4835]: I0201 07:27:16.629824 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hpgql"] Feb 01 
Feb 01 07:27:17 crc kubenswrapper[4835]: I0201 07:27:17.084928 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 01 07:27:17 crc kubenswrapper[4835]: I0201 07:27:17.590863 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f4b60b-0076-4087-b541-4617c3752687" path="/var/lib/kubelet/pods/46f4b60b-0076-4087-b541-4617c3752687/volumes"
Feb 01 07:27:17 crc kubenswrapper[4835]: I0201 07:27:17.592045 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" path="/var/lib/kubelet/pods/79f19c84-0217-4b08-8b4d-663096ce67b4/volumes"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.042916 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-www9n"]
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043281 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043303 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043328 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="extract-utilities"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043340 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="extract-utilities"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043358 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="extract-utilities"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043370 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="extract-utilities"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043385 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043397 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043482 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f4b60b-0076-4087-b541-4617c3752687" containerName="route-controller-manager"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043496 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f4b60b-0076-4087-b541-4617c3752687" containerName="route-controller-manager"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043515 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="extract-content"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043527 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="extract-content"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043544 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043557 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043573 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="extract-content"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043585 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="extract-content"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043603 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="extract-content"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043614 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="extract-content"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043627 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043639 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043654 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043667 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043682 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="extract-content"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043695 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="extract-content"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043713 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043725 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043742 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043754 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043772 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="extract-utilities"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043784 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="extract-utilities"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043797 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" containerName="installer"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043813 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" containerName="installer"
Feb 01 07:27:19 crc kubenswrapper[4835]: E0201 07:27:19.043828 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="extract-utilities"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043841 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="extract-utilities"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.043993 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f4b60b-0076-4087-b541-4617c3752687" containerName="route-controller-manager"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044015 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2bb332-ae2b-4ef7-90b2-79928bf7407b" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044033 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a177b30-3240-49d8-b0c5-b74f8e8f4c7e" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044054 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044072 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="602186bd-e71a-4ce1-ad39-c56495e815c3" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044090 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b454b8-f758-43d4-bd2b-93ebc807e06e" containerName="installer"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044102 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="835b2622-9047-4e3a-b019-6f15c5fd4566" containerName="registry-server"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044117 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f19c84-0217-4b08-8b4d-663096ce67b4" containerName="controller-manager"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044131 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8615180e-fc31-41b2-ad59-5ae2e48af5a2" containerName="marketplace-operator"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.044852 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.047462 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.049149 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"]
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.049610 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.050128 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
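The RemoveStaleState / "Deleted CPUSet assignment" pairs above are the cpu_manager (and then the memory_manager) dropping per-container resource bookkeeping for pods that no longer exist, before admitting the new marketplace-operator pod. A rough Go sketch of that cleanup shape, with hypothetical types rather than kubelet's real state API:

package main

import "fmt"

// assignments maps podUID -> containerName -> assigned CPU set (a plain
// string here; kubelet's real state types are richer). Names are illustrative.
type assignments map[string]map[string]string

// removeStaleState deletes every assignment whose pod is no longer in the
// active set, mirroring the cpu_manager/memory_manager records in the log.
func removeStaleState(state assignments, activePods map[string]bool) {
	for podUID, containers := range state {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(state, podUID)
	}
}

func main() {
	state := assignments{
		"46f4b60b": {"route-controller-manager": "0-1"},
		"79f19c84": {"controller-manager": "2-3"},
	}
	// Neither pod is active any more, so both assignments are dropped.
	removeStaleState(state, map[string]bool{})
	fmt.Println("remaining assignments:", len(state)) // 0
}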
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.051018 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.053857 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.053949 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.053861 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.055777 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dcg\" (UniqueName: \"kubernetes.io/projected/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-kube-api-access-s6dcg\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.055858 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.055933 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2p6\" (UniqueName: \"kubernetes.io/projected/c2481990-b703-4792-b5b0-549daf22e66a-kube-api-access-gq2p6\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.056024 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-client-ca\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.056083 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.056143 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-config\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.056227 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-serving-cert\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.057063 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.057400 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.057405 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.057757 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.088248 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157820 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dcg\" (UniqueName: \"kubernetes.io/projected/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-kube-api-access-s6dcg\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157883 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157915 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2p6\" (UniqueName: \"kubernetes.io/projected/c2481990-b703-4792-b5b0-549daf22e66a-kube-api-access-gq2p6\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157965 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.157999 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-client-ca\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.158035 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-config\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.158087 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-serving-cert\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.159547 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.159969 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-client-ca\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.161739 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-config\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.163841 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-serving-cert\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.163920 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c2481990-b703-4792-b5b0-549daf22e66a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.181271 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2p6\" (UniqueName: \"kubernetes.io/projected/c2481990-b703-4792-b5b0-549daf22e66a-kube-api-access-gq2p6\") pod \"marketplace-operator-79b997595-www9n\" (UID: \"c2481990-b703-4792-b5b0-549daf22e66a\") " pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.199752 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dcg\" (UniqueName: \"kubernetes.io/projected/fb5f0b62-9cf6-4533-a1cb-d29f55a41ace-kube-api-access-s6dcg\") pod \"route-controller-manager-7b7594c6d4-8jhcx\" (UID: \"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace\") " pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.374005 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-www9n"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.385624 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"
Feb 01 07:27:19 crc kubenswrapper[4835]: I0201 07:27:19.488576 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.072372 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"]
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.073747 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.076392 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078040 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078321 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078485 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078495 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.078573 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.098345 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.171886 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.171951 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.171985 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.172226 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.172314 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273208 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273351 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273395 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273451 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.273485 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.275305 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.275588 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.275815 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.291533 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.292693 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") pod \"controller-manager-84fd975466-sxqz2\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.395944 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.536337 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.700281 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wqgsq"]
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.702234 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqgsq"
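The reflector "Caches populated" records above mark the point where each informer's initial LIST has landed in the kubelet's local cache; pod setup blocks on that sync before reading the referenced Secrets and ConfigMaps. A minimal client-go sketch of the same sync point, using a fake clientset so it runs without a cluster (this is the library pattern, not kubelet's actual wiring):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// A fake clientset seeded with one ConfigMap stands in for the API server.
	client := fake.NewSimpleClientset(&v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "client-ca", Namespace: "openshift-controller-manager"},
	})

	factory := informers.NewSharedInformerFactory(client, 0)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// WaitForCacheSync blocks until the reflector's initial LIST has populated
	// the local store -- the moment the kubelet logs "Caches populated".
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated:", len(cmInformer.GetStore().List()), "object(s)")
}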
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.704708 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.781174 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-catalog-content\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.781235 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-utilities\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.781295 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqnvm\" (UniqueName: \"kubernetes.io/projected/5cb5bbc9-0e87-45ed-897f-6e343be075d5-kube-api-access-nqnvm\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.882780 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-catalog-content\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.882860 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-utilities\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.882956 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqnvm\" (UniqueName: \"kubernetes.io/projected/5cb5bbc9-0e87-45ed-897f-6e343be075d5-kube-api-access-nqnvm\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.883803 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-catalog-content\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.883959 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cb5bbc9-0e87-45ed-897f-6e343be075d5-utilities\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:20 crc kubenswrapper[4835]: I0201 07:27:20.914359 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqnvm\" (UniqueName: \"kubernetes.io/projected/5cb5bbc9-0e87-45ed-897f-6e343be075d5-kube-api-access-nqnvm\") pod \"certified-operators-wqgsq\" (UID: \"5cb5bbc9-0e87-45ed-897f-6e343be075d5\") " pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.031942 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wqgsq"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.088360 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5blqv"]
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.090064 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.098533 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.186311 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.186446 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.186498 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.287383 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.287822 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.287982 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.288248 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.288811 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.314059 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") pod \"community-operators-5blqv\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.414164 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5blqv"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.747785 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"]
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.762761 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqgsq"]
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.778011 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"]
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.785183 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5blqv"]
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.791476 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-www9n"]
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.829019 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 01 07:27:21 crc kubenswrapper[4835]: I0201 07:27:21.835649 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.094754 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx"]
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.113094 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-www9n"]
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.359076 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wqgsq"]
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.362428 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"]
Feb 01 07:27:22 crc kubenswrapper[4835]: W0201 07:27:22.365836 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cb5bbc9_0e87_45ed_897f_6e343be075d5.slice/crio-67b09ed4b7203d63d56d1fda7350e7801f601d8f630976f5ef65ea2f803d3d7b WatchSource:0}: Error finding container 67b09ed4b7203d63d56d1fda7350e7801f601d8f630976f5ef65ea2f803d3d7b: Status 404 returned error can't find the container with id 67b09ed4b7203d63d56d1fda7350e7801f601d8f630976f5ef65ea2f803d3d7b
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.365838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5blqv"]
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.614322 4835 generic.go:334] "Generic (PLEG): container finished" podID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerID="d5e2f5d1534650a4cf1433bf132faf98e02e52decf048ace44fbb7b0f61e32fe" exitCode=0
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.614624 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerDied","Data":"d5e2f5d1534650a4cf1433bf132faf98e02e52decf048ace44fbb7b0f61e32fe"}
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.614649 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerStarted","Data":"a97613cab5446cbb6022f66ef99ec2081a9134140b311ba32389e80a2e221cbc"}
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.618050 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" event={"ID":"10fad4bf-1fb3-4455-a349-fefb7f585c30","Type":"ContainerStarted","Data":"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba"}
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.618103 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" event={"ID":"10fad4bf-1fb3-4455-a349-fefb7f585c30","Type":"ContainerStarted","Data":"6f7177eac40a95ebcf05587a253beb2d53eb30227ac5417fca9d9b44c2b17f2d"}
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.618501 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2"
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.619765 4835 patch_prober.go:28] interesting pod/controller-manager-84fd975466-sxqz2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body=
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.619803 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused"
Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.619976 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" event={"ID":"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace","Type":"ContainerStarted","Data":"0bfcdb682af11bf82610dd655dbb0dbb6c99ca021d689f94b4283cc1ec45a205"}
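The readiness probe above fails with "connection refused" because the kubelet probes the pod IP immediately after the container starts, before the server is listening; a later probe flips the pod to ready (see the "status=ready" records below). A bare-bones Go version of such an HTTP probe loop; the URL is illustrative, while the kubelet here is probing https://10.217.0.57:8443/healthz with its own transport settings:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce mimics an HTTP readiness probe: any transport error (such as
// "connect: connection refused" while the server is still starting) or a
// non-2xx status counts as "not ready".
func probeOnce(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	url := "http://127.0.0.1:8443/healthz" // illustrative endpoint
	for i := 0; i < 3; i++ {
		if err := probeOnce(url); err != nil {
			fmt.Println("Probe failed:", err)
			time.Sleep(time.Second) // kubelet waits periodSeconds between attempts
			continue
		}
		fmt.Println("ready")
		return
	}
}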
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" event={"ID":"fb5f0b62-9cf6-4533-a1cb-d29f55a41ace","Type":"ContainerStarted","Data":"214fe92092a097c47af795f796c7ad8ad3e488a2461c92c690c4b5b33c211332"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.620573 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.624372 4835 generic.go:334] "Generic (PLEG): container finished" podID="5cb5bbc9-0e87-45ed-897f-6e343be075d5" containerID="b848e8c631552b1f89d6940b9b5fb1525c09a3eff4768433cb972cfe507ad540" exitCode=0 Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.624434 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqgsq" event={"ID":"5cb5bbc9-0e87-45ed-897f-6e343be075d5","Type":"ContainerDied","Data":"b848e8c631552b1f89d6940b9b5fb1525c09a3eff4768433cb972cfe507ad540"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.624470 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqgsq" event={"ID":"5cb5bbc9-0e87-45ed-897f-6e343be075d5","Type":"ContainerStarted","Data":"67b09ed4b7203d63d56d1fda7350e7801f601d8f630976f5ef65ea2f803d3d7b"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.626214 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" event={"ID":"c2481990-b703-4792-b5b0-549daf22e66a","Type":"ContainerStarted","Data":"55d8a61bd08b899d611ab4873dc003c8f3aa530c72b7031f33331e3ae5509f09"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.626298 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" event={"ID":"c2481990-b703-4792-b5b0-549daf22e66a","Type":"ContainerStarted","Data":"fa4397b21d379cbe699a0409b625a1bc94cda8f719e36a18f2b4f429654336a9"} Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.627183 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.629563 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.651300 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" podStartSLOduration=7.651280775 podStartE2EDuration="7.651280775s" podCreationTimestamp="2026-02-01 07:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:27:22.648817759 +0000 UTC m=+315.769254193" watchObservedRunningTime="2026-02-01 07:27:22.651280775 +0000 UTC m=+315.771717209" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.669781 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-www9n" podStartSLOduration=30.669761172 podStartE2EDuration="30.669761172s" podCreationTimestamp="2026-02-01 07:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:27:22.666860224 
+0000 UTC m=+315.787296658" watchObservedRunningTime="2026-02-01 07:27:22.669761172 +0000 UTC m=+315.790197626" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.699511 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" podStartSLOduration=7.69947409 podStartE2EDuration="7.69947409s" podCreationTimestamp="2026-02-01 07:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:27:22.697068016 +0000 UTC m=+315.817504450" watchObservedRunningTime="2026-02-01 07:27:22.69947409 +0000 UTC m=+315.819910534" Feb 01 07:27:22 crc kubenswrapper[4835]: I0201 07:27:22.816319 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b7594c6d4-8jhcx" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.301293 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ghmxq"] Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.302977 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.306086 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghmxq"] Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.306186 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.417268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-utilities\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.417332 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpmn\" (UniqueName: \"kubernetes.io/projected/0155c2ce-1bd0-424d-931f-132c22e7a42e-kube-api-access-dlpmn\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.417471 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-catalog-content\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.486093 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-75mhs"] Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.488194 4835 util.go:30] "No sandbox for pod can be found. 
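The pod_startup_latency_tracker records above report podStartE2EDuration as the observed running time minus podCreationTimestamp; with no image pulls (the zero "0001-01-01" pull timestamps), SLO duration and E2E duration coincide. The arithmetic, using values copied from the route-controller-manager entry (the numbers line up with watchObservedRunningTime minus podCreationTimestamp):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log; the layout is Go's default time.Time
	// formatting, which is what those log fields use.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-02-01 07:27:15 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-02-01 07:27:22.651280775 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 7.651280775s == podStartSLOduration
}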
Need to start a new one" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.493211 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75mhs"] Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.493850 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.518545 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlpmn\" (UniqueName: \"kubernetes.io/projected/0155c2ce-1bd0-424d-931f-132c22e7a42e-kube-api-access-dlpmn\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.518638 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-catalog-content\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.518658 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-utilities\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.519127 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-utilities\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.519629 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0155c2ce-1bd0-424d-931f-132c22e7a42e-catalog-content\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.545519 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlpmn\" (UniqueName: \"kubernetes.io/projected/0155c2ce-1bd0-424d-931f-132c22e7a42e-kube-api-access-dlpmn\") pod \"redhat-marketplace-ghmxq\" (UID: \"0155c2ce-1bd0-424d-931f-132c22e7a42e\") " pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.606736 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.620047 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-catalog-content\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.620144 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-utilities\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.620192 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl6bf\" (UniqueName: \"kubernetes.io/projected/5fead728-7b7f-4ee9-b01e-455d536a88c5-kube-api-access-wl6bf\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.640602 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.640634 4835 generic.go:334] "Generic (PLEG): container finished" podID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerID="00639fbfdc8c05a878182afacfc54aac4d6d97d80b8d202f1d59fcc0b702129d" exitCode=0 Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.640951 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerDied","Data":"00639fbfdc8c05a878182afacfc54aac4d6d97d80b8d202f1d59fcc0b702129d"} Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.644753 4835 generic.go:334] "Generic (PLEG): container finished" podID="5cb5bbc9-0e87-45ed-897f-6e343be075d5" containerID="c0aa03511aa1ec11ff61e924a59a70fe4a8671768f13fe80bccd019c9f867dfe" exitCode=0 Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.645659 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqgsq" event={"ID":"5cb5bbc9-0e87-45ed-897f-6e343be075d5","Type":"ContainerDied","Data":"c0aa03511aa1ec11ff61e924a59a70fe4a8671768f13fe80bccd019c9f867dfe"} Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.658040 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.722467 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-utilities\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.722934 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-utilities\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.723067 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl6bf\" (UniqueName: \"kubernetes.io/projected/5fead728-7b7f-4ee9-b01e-455d536a88c5-kube-api-access-wl6bf\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.723234 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-catalog-content\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.723673 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fead728-7b7f-4ee9-b01e-455d536a88c5-catalog-content\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.761381 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl6bf\" (UniqueName: \"kubernetes.io/projected/5fead728-7b7f-4ee9-b01e-455d536a88c5-kube-api-access-wl6bf\") pod \"redhat-operators-75mhs\" (UID: \"5fead728-7b7f-4ee9-b01e-455d536a88c5\") " pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.810028 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:23 crc kubenswrapper[4835]: I0201 07:27:23.915502 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghmxq"] Feb 01 07:27:23 crc kubenswrapper[4835]: W0201 07:27:23.923369 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0155c2ce_1bd0_424d_931f_132c22e7a42e.slice/crio-f845d2547c4126eb18b128ebeeafa39644f96775618df5688ad77e4e3f29c39d WatchSource:0}: Error finding container f845d2547c4126eb18b128ebeeafa39644f96775618df5688ad77e4e3f29c39d: Status 404 returned error can't find the container with id f845d2547c4126eb18b128ebeeafa39644f96775618df5688ad77e4e3f29c39d Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.027424 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-75mhs"] Feb 01 07:27:24 crc kubenswrapper[4835]: W0201 07:27:24.036209 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fead728_7b7f_4ee9_b01e_455d536a88c5.slice/crio-62f4b19fdb4c0830e59e7999d04de98106aa9561055a94a7befd8ae86378d63b WatchSource:0}: Error finding container 62f4b19fdb4c0830e59e7999d04de98106aa9561055a94a7befd8ae86378d63b: Status 404 returned error can't find the container with id 62f4b19fdb4c0830e59e7999d04de98106aa9561055a94a7befd8ae86378d63b Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.656426 4835 generic.go:334] "Generic (PLEG): container finished" podID="0155c2ce-1bd0-424d-931f-132c22e7a42e" containerID="83f242ee4bd070b393af829ec7bc10d6cbec9cfb20d3c5696c271f8ab3b1cf03" exitCode=0 Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.656495 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghmxq" event={"ID":"0155c2ce-1bd0-424d-931f-132c22e7a42e","Type":"ContainerDied","Data":"83f242ee4bd070b393af829ec7bc10d6cbec9cfb20d3c5696c271f8ab3b1cf03"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.656524 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghmxq" event={"ID":"0155c2ce-1bd0-424d-931f-132c22e7a42e","Type":"ContainerStarted","Data":"f845d2547c4126eb18b128ebeeafa39644f96775618df5688ad77e4e3f29c39d"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 
07:27:24.659149 4835 generic.go:334] "Generic (PLEG): container finished" podID="5fead728-7b7f-4ee9-b01e-455d536a88c5" containerID="86d803fc4c63848e36bb959eaf3a1fce37d7cbdddaa9fbcb8d7849cca6cbdf42" exitCode=0 Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.659353 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerDied","Data":"86d803fc4c63848e36bb959eaf3a1fce37d7cbdddaa9fbcb8d7849cca6cbdf42"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.659382 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerStarted","Data":"62f4b19fdb4c0830e59e7999d04de98106aa9561055a94a7befd8ae86378d63b"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.664384 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerStarted","Data":"4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.667201 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wqgsq" event={"ID":"5cb5bbc9-0e87-45ed-897f-6e343be075d5","Type":"ContainerStarted","Data":"c119a3eab261e08dc8aeb835a550bc135c657074ffe17856b559cc9a58f6f021"} Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.696713 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5blqv" podStartSLOduration=2.197844143 podStartE2EDuration="3.69669841s" podCreationTimestamp="2026-02-01 07:27:21 +0000 UTC" firstStartedPulling="2026-02-01 07:27:22.615664648 +0000 UTC m=+315.736101082" lastFinishedPulling="2026-02-01 07:27:24.114518915 +0000 UTC m=+317.234955349" observedRunningTime="2026-02-01 07:27:24.694247044 +0000 UTC m=+317.814683498" watchObservedRunningTime="2026-02-01 07:27:24.69669841 +0000 UTC m=+317.817134844" Feb 01 07:27:24 crc kubenswrapper[4835]: I0201 07:27:24.759350 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wqgsq" podStartSLOduration=3.310643734 podStartE2EDuration="4.759332383s" podCreationTimestamp="2026-02-01 07:27:20 +0000 UTC" firstStartedPulling="2026-02-01 07:27:22.625711978 +0000 UTC m=+315.746148412" lastFinishedPulling="2026-02-01 07:27:24.074400627 +0000 UTC m=+317.194837061" observedRunningTime="2026-02-01 07:27:24.756710932 +0000 UTC m=+317.877147376" watchObservedRunningTime="2026-02-01 07:27:24.759332383 +0000 UTC m=+317.879768817" Feb 01 07:27:25 crc kubenswrapper[4835]: I0201 07:27:25.673590 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerStarted","Data":"d2483b22d57573b85ac190f9ef9e6d4d021206c16a4b9a259ca21ce3bc676263"} Feb 01 07:27:25 crc kubenswrapper[4835]: I0201 07:27:25.675210 4835 generic.go:334] "Generic (PLEG): container finished" podID="0155c2ce-1bd0-424d-931f-132c22e7a42e" containerID="a0f3a8b184b1495ee75f611ba885b2af17d82d78a688a542ffbc9c5ecdd9a195" exitCode=0 Feb 01 07:27:25 crc kubenswrapper[4835]: I0201 07:27:25.675258 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghmxq" 
event={"ID":"0155c2ce-1bd0-424d-931f-132c22e7a42e","Type":"ContainerDied","Data":"a0f3a8b184b1495ee75f611ba885b2af17d82d78a688a542ffbc9c5ecdd9a195"} Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.687535 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghmxq" event={"ID":"0155c2ce-1bd0-424d-931f-132c22e7a42e","Type":"ContainerStarted","Data":"48a2b36849fc2c37227697cfeb9e8dbaba66864ba3562dfccc288b1f01746ed4"} Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.692171 4835 generic.go:334] "Generic (PLEG): container finished" podID="5fead728-7b7f-4ee9-b01e-455d536a88c5" containerID="d2483b22d57573b85ac190f9ef9e6d4d021206c16a4b9a259ca21ce3bc676263" exitCode=0 Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.692230 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerDied","Data":"d2483b22d57573b85ac190f9ef9e6d4d021206c16a4b9a259ca21ce3bc676263"} Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.716010 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ghmxq" podStartSLOduration=2.273221061 podStartE2EDuration="3.715992072s" podCreationTimestamp="2026-02-01 07:27:23 +0000 UTC" firstStartedPulling="2026-02-01 07:27:24.658199815 +0000 UTC m=+317.778636259" lastFinishedPulling="2026-02-01 07:27:26.100970796 +0000 UTC m=+319.221407270" observedRunningTime="2026-02-01 07:27:26.71591708 +0000 UTC m=+319.836353514" watchObservedRunningTime="2026-02-01 07:27:26.715992072 +0000 UTC m=+319.836428516" Feb 01 07:27:26 crc kubenswrapper[4835]: I0201 07:27:26.748184 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 01 07:27:27 crc kubenswrapper[4835]: I0201 07:27:27.700321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-75mhs" event={"ID":"5fead728-7b7f-4ee9-b01e-455d536a88c5","Type":"ContainerStarted","Data":"5b87fd0df43de20c1c8f6d921d84e6080d54e5d2cadfd41bc826c9b5485e7b95"} Feb 01 07:27:27 crc kubenswrapper[4835]: I0201 07:27:27.719847 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-75mhs" podStartSLOduration=2.268787224 podStartE2EDuration="4.719834508s" podCreationTimestamp="2026-02-01 07:27:23 +0000 UTC" firstStartedPulling="2026-02-01 07:27:24.66059634 +0000 UTC m=+317.781032784" lastFinishedPulling="2026-02-01 07:27:27.111643594 +0000 UTC m=+320.232080068" observedRunningTime="2026-02-01 07:27:27.717780372 +0000 UTC m=+320.838216806" watchObservedRunningTime="2026-02-01 07:27:27.719834508 +0000 UTC m=+320.840270942" Feb 01 07:27:28 crc kubenswrapper[4835]: I0201 07:27:28.205689 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.032439 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.034362 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.103862 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 
07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.414595 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.414802 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.486686 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.794878 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wqgsq" Feb 01 07:27:31 crc kubenswrapper[4835]: I0201 07:27:31.800646 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.641274 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.641699 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.708466 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.803213 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ghmxq" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.810614 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:33 crc kubenswrapper[4835]: I0201 07:27:33.811729 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:34 crc kubenswrapper[4835]: I0201 07:27:34.859122 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-75mhs" podUID="5fead728-7b7f-4ee9-b01e-455d536a88c5" containerName="registry-server" probeResult="failure" output=< Feb 01 07:27:34 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 01 07:27:34 crc kubenswrapper[4835]: > Feb 01 07:27:43 crc kubenswrapper[4835]: I0201 07:27:43.879217 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:43 crc kubenswrapper[4835]: I0201 07:27:43.953084 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-75mhs" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.415713 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vf2w6"] Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.416824 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.435386 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vf2w6"] Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.512776 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-bound-sa-token\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.512863 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.512912 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhwr\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-kube-api-access-nrhwr\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513072 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-trusted-ca\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513169 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-tls\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.513219 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-certificates\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.553466 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614461 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-tls\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614550 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-certificates\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614610 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-bound-sa-token\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614716 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614803 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrhwr\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-kube-api-access-nrhwr\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614876 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-trusted-ca\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.614933 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.615742 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.616815 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-trusted-ca\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.617387 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-certificates\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.621767 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-registry-tls\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.622371 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.641372 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrhwr\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-kube-api-access-nrhwr\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.643032 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c8fd8acb-0598-40bb-9e6d-2c194fc51b9d-bound-sa-token\") pod \"image-registry-66df7c8f76-vf2w6\" (UID: \"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d\") " pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:58 crc kubenswrapper[4835]: I0201 07:27:58.751840 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.213374 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vf2w6"] Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.914836 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" event={"ID":"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d","Type":"ContainerStarted","Data":"071e2509ffd8cb3110efedb31c69070508669e9d5876115c0fa6fd27f476f51b"} Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.914887 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" event={"ID":"c8fd8acb-0598-40bb-9e6d-2c194fc51b9d","Type":"ContainerStarted","Data":"d61341de0abb514b232bd4985f15ba4a6fa226486179fedabf3cd9d55c8ac98f"} Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.915047 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:27:59 crc kubenswrapper[4835]: I0201 07:27:59.949024 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" podStartSLOduration=1.9490044709999998 podStartE2EDuration="1.949004471s" podCreationTimestamp="2026-02-01 07:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:27:59.943505023 +0000 UTC m=+353.063941467" watchObservedRunningTime="2026-02-01 07:27:59.949004471 +0000 UTC m=+353.069440915" Feb 01 07:28:15 crc kubenswrapper[4835]: I0201 07:28:15.505188 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"] Feb 01 07:28:15 crc kubenswrapper[4835]: I0201 07:28:15.505920 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager" containerID="cri-o://6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba" gracePeriod=30 Feb 01 07:28:15 crc kubenswrapper[4835]: I0201 07:28:15.928706 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013215 4835 generic.go:334] "Generic (PLEG): container finished" podID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerID="6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba" exitCode=0 Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013295 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013352 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" event={"ID":"10fad4bf-1fb3-4455-a349-fefb7f585c30","Type":"ContainerDied","Data":"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba"} Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013974 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-84fd975466-sxqz2" event={"ID":"10fad4bf-1fb3-4455-a349-fefb7f585c30","Type":"ContainerDied","Data":"6f7177eac40a95ebcf05587a253beb2d53eb30227ac5417fca9d9b44c2b17f2d"} Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.013997 4835 scope.go:117] "RemoveContainer" containerID="6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.027748 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.027863 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.027917 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.028012 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.028697 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") pod \"10fad4bf-1fb3-4455-a349-fefb7f585c30\" (UID: \"10fad4bf-1fb3-4455-a349-fefb7f585c30\") " Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.029140 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca" (OuterVolumeSpecName: "client-ca") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.029217 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.029232 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config" (OuterVolumeSpecName: "config") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.032915 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.034178 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp" (OuterVolumeSpecName: "kube-api-access-t58dp") pod "10fad4bf-1fb3-4455-a349-fefb7f585c30" (UID: "10fad4bf-1fb3-4455-a349-fefb7f585c30"). InnerVolumeSpecName "kube-api-access-t58dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.048014 4835 scope.go:117] "RemoveContainer" containerID="6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba" Feb 01 07:28:16 crc kubenswrapper[4835]: E0201 07:28:16.048542 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba\": container with ID starting with 6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba not found: ID does not exist" containerID="6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.048600 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba"} err="failed to get container status \"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba\": rpc error: code = NotFound desc = could not find container \"6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba\": container with ID starting with 6c09a7ee33cfbd1024e1b7e694abb6d0b5c45595282350e494516e60f2433aba not found: ID does not exist" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130367 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fad4bf-1fb3-4455-a349-fefb7f585c30-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130428 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-client-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130440 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t58dp\" (UniqueName: \"kubernetes.io/projected/10fad4bf-1fb3-4455-a349-fefb7f585c30-kube-api-access-t58dp\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130449 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.130459 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fad4bf-1fb3-4455-a349-fefb7f585c30-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.360097 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"] Feb 01 07:28:16 crc kubenswrapper[4835]: I0201 07:28:16.366585 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-84fd975466-sxqz2"] Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.107818 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7767cd8d75-j5r8c"] Feb 01 07:28:17 crc kubenswrapper[4835]: E0201 07:28:17.108236 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.108252 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.108363 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" containerName="controller-manager" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.108862 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.110837 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.111117 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.111484 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.114828 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.114905 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.115021 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.121771 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.133866 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7767cd8d75-j5r8c"] Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.255351 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-proxy-ca-bundles\") pod 
\"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.255912 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-client-ca\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.255993 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-serving-cert\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.256070 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-config\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.256178 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5vrz\" (UniqueName: \"kubernetes.io/projected/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-kube-api-access-w5vrz\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357561 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-client-ca\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357615 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-serving-cert\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357653 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-config\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357726 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5vrz\" (UniqueName: \"kubernetes.io/projected/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-kube-api-access-w5vrz\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " 
pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.357767 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-proxy-ca-bundles\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.358930 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-client-ca\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.359034 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-proxy-ca-bundles\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.359661 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-config\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.362996 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-serving-cert\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.374990 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5vrz\" (UniqueName: \"kubernetes.io/projected/9dfc9814-856f-4e2b-ac49-32b78a2d0b7c-kube-api-access-w5vrz\") pod \"controller-manager-7767cd8d75-j5r8c\" (UID: \"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c\") " pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.427234 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.578316 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10fad4bf-1fb3-4455-a349-fefb7f585c30" path="/var/lib/kubelet/pods/10fad4bf-1fb3-4455-a349-fefb7f585c30/volumes" Feb 01 07:28:17 crc kubenswrapper[4835]: I0201 07:28:17.618690 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7767cd8d75-j5r8c"] Feb 01 07:28:17 crc kubenswrapper[4835]: W0201 07:28:17.628443 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dfc9814_856f_4e2b_ac49_32b78a2d0b7c.slice/crio-410d7c0289a65fc1f7a68d20d31e1eda02fcc1ac6cc33c3da8a4f8d1bdf75734 WatchSource:0}: Error finding container 410d7c0289a65fc1f7a68d20d31e1eda02fcc1ac6cc33c3da8a4f8d1bdf75734: Status 404 returned error can't find the container with id 410d7c0289a65fc1f7a68d20d31e1eda02fcc1ac6cc33c3da8a4f8d1bdf75734 Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.026825 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" event={"ID":"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c","Type":"ContainerStarted","Data":"0a89d58a187e594dd324d0508e36cee170d5705151a3f1083249f57f47db8f94"} Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.027208 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" event={"ID":"9dfc9814-856f-4e2b-ac49-32b78a2d0b7c","Type":"ContainerStarted","Data":"410d7c0289a65fc1f7a68d20d31e1eda02fcc1ac6cc33c3da8a4f8d1bdf75734"} Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.027613 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.047238 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.049679 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7767cd8d75-j5r8c" podStartSLOduration=3.04965885 podStartE2EDuration="3.04965885s" podCreationTimestamp="2026-02-01 07:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:28:18.045893289 +0000 UTC m=+371.166329723" watchObservedRunningTime="2026-02-01 07:28:18.04965885 +0000 UTC m=+371.170095284" Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.762348 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vf2w6" Feb 01 07:28:18 crc kubenswrapper[4835]: I0201 07:28:18.833957 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"] Feb 01 07:28:25 crc kubenswrapper[4835]: I0201 07:28:25.192512 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:28:25 crc kubenswrapper[4835]: I0201 
07:28:25.193146 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:28:43 crc kubenswrapper[4835]: I0201 07:28:43.893050 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerName="registry" containerID="cri-o://3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81" gracePeriod=30 Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.201826 4835 generic.go:334] "Generic (PLEG): container finished" podID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerID="3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81" exitCode=0 Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.201964 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" event={"ID":"ac521dca-2154-40bb-bbdb-a22e3d6abd72","Type":"ContainerDied","Data":"3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81"} Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.435658 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624488 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624558 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624615 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624667 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624893 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.624940 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.625034 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.625126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") pod \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\" (UID: \"ac521dca-2154-40bb-bbdb-a22e3d6abd72\") " Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.626290 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.626315 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.627383 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.627732 4835 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.631440 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.637047 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.637956 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.638034 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj" (OuterVolumeSpecName: "kube-api-access-w7bnj") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "kube-api-access-w7bnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.642070 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.659574 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ac521dca-2154-40bb-bbdb-a22e3d6abd72" (UID: "ac521dca-2154-40bb-bbdb-a22e3d6abd72"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.731043 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.731955 4835 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ac521dca-2154-40bb-bbdb-a22e3d6abd72-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.731977 4835 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ac521dca-2154-40bb-bbdb-a22e3d6abd72-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.731997 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7bnj\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-kube-api-access-w7bnj\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:44 crc kubenswrapper[4835]: I0201 07:28:44.732017 4835 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ac521dca-2154-40bb-bbdb-a22e3d6abd72-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.211727 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" event={"ID":"ac521dca-2154-40bb-bbdb-a22e3d6abd72","Type":"ContainerDied","Data":"7009647035bcb9b3d9a9385f910f574abe92ca7bc6f2836a8743b47eb765ed4a"} Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.211784 4835 scope.go:117] "RemoveContainer" containerID="3f33f19419e62411bac7a2082cf36c839014695310e5de008fdbd44a3e0eba81" Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.211834 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-66fqg" Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.263550 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"] Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.270830 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-66fqg"] Feb 01 07:28:45 crc kubenswrapper[4835]: I0201 07:28:45.578813 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" path="/var/lib/kubelet/pods/ac521dca-2154-40bb-bbdb-a22e3d6abd72/volumes" Feb 01 07:28:55 crc kubenswrapper[4835]: I0201 07:28:55.191614 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:28:55 crc kubenswrapper[4835]: I0201 07:28:55.192369 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.192286 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.193053 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.193125 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.194179 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.194297 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf" gracePeriod=600 Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.487759 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf" exitCode=0 Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.487927 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf"} Feb 01 07:29:25 crc kubenswrapper[4835]: I0201 07:29:25.488105 4835 scope.go:117] "RemoveContainer" containerID="b5eafd5efc64523e979e8179e037eae1d437a5546d7e4f763b9fdbd61e39add5" Feb 01 07:29:26 crc kubenswrapper[4835]: I0201 07:29:26.497504 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f"} Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.208157 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"] Feb 01 07:30:00 crc kubenswrapper[4835]: E0201 07:30:00.209603 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerName="registry" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.209661 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerName="registry" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.209907 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac521dca-2154-40bb-bbdb-a22e3d6abd72" containerName="registry" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.211166 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.214039 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.214201 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.216880 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"] Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.245909 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.246002 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.246052 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") pod 
\"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.346996 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.347062 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.347100 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.348247 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.361100 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.379713 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") pod \"collect-profiles-29498850-84h7z\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:00 crc kubenswrapper[4835]: I0201 07:30:00.540094 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:01 crc kubenswrapper[4835]: I0201 07:30:01.029108 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z"] Feb 01 07:30:01 crc kubenswrapper[4835]: I0201 07:30:01.741509 4835 generic.go:334] "Generic (PLEG): container finished" podID="2a3f2951-1c06-484a-9c2e-502d2adaa6cd" containerID="6a8f3f0f8045324c04ea0f25d07e785228bc538f428f47c8c77a96101a2d3e96" exitCode=0 Feb 01 07:30:01 crc kubenswrapper[4835]: I0201 07:30:01.741550 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" event={"ID":"2a3f2951-1c06-484a-9c2e-502d2adaa6cd","Type":"ContainerDied","Data":"6a8f3f0f8045324c04ea0f25d07e785228bc538f428f47c8c77a96101a2d3e96"} Feb 01 07:30:01 crc kubenswrapper[4835]: I0201 07:30:01.741573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" event={"ID":"2a3f2951-1c06-484a-9c2e-502d2adaa6cd","Type":"ContainerStarted","Data":"5bf646b8fc2ede47108ed327acdc22c029c65c6b2d07abd2b9f281fee0ab2314"} Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.099569 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.208870 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") pod \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.208943 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") pod \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.209084 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") pod \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\" (UID: \"2a3f2951-1c06-484a-9c2e-502d2adaa6cd\") " Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.210957 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a3f2951-1c06-484a-9c2e-502d2adaa6cd" (UID: "2a3f2951-1c06-484a-9c2e-502d2adaa6cd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.219453 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd" (OuterVolumeSpecName: "kube-api-access-h79cd") pod "2a3f2951-1c06-484a-9c2e-502d2adaa6cd" (UID: "2a3f2951-1c06-484a-9c2e-502d2adaa6cd"). InnerVolumeSpecName "kube-api-access-h79cd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.220403 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a3f2951-1c06-484a-9c2e-502d2adaa6cd" (UID: "2a3f2951-1c06-484a-9c2e-502d2adaa6cd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.310286 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h79cd\" (UniqueName: \"kubernetes.io/projected/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-kube-api-access-h79cd\") on node \"crc\" DevicePath \"\"" Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.310717 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.310731 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a3f2951-1c06-484a-9c2e-502d2adaa6cd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.757456 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" event={"ID":"2a3f2951-1c06-484a-9c2e-502d2adaa6cd","Type":"ContainerDied","Data":"5bf646b8fc2ede47108ed327acdc22c029c65c6b2d07abd2b9f281fee0ab2314"} Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.757511 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf646b8fc2ede47108ed327acdc22c029c65c6b2d07abd2b9f281fee0ab2314" Feb 01 07:30:03 crc kubenswrapper[4835]: I0201 07:30:03.757520 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498850-84h7z" Feb 01 07:31:25 crc kubenswrapper[4835]: I0201 07:31:25.192022 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:31:25 crc kubenswrapper[4835]: I0201 07:31:25.193935 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.266030 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5z5dl"] Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268130 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-controller" containerID="cri-o://8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" gracePeriod=30 Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268272 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="northd" containerID="cri-o://c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" gracePeriod=30 Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268247 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="sbdb" containerID="cri-o://85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" gracePeriod=30 Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268170 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="nbdb" containerID="cri-o://0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" gracePeriod=30 Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268436 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-node" containerID="cri-o://044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" gracePeriod=30 Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268357 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-acl-logging" containerID="cri-o://03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" gracePeriod=30 Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.268358 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" gracePeriod=30 Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.374336 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" containerID="cri-o://a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" gracePeriod=30 Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.622193 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.624378 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovn-acl-logging/0.log" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.624809 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovn-controller/0.log" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.625264 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673149 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mdtv2"] Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673376 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673392 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673406 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673430 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673442 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="northd" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673449 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="northd" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673461 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kubecfg-setup" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673468 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kubecfg-setup" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673482 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-node" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673489 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-node" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 
07:31:30.673500 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673507 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673515 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673522 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673531 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3f2951-1c06-484a-9c2e-502d2adaa6cd" containerName="collect-profiles" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673539 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3f2951-1c06-484a-9c2e-502d2adaa6cd" containerName="collect-profiles" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673552 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-ovn-metrics" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673559 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-ovn-metrics" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673568 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-acl-logging" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673574 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-acl-logging" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673584 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="nbdb" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673592 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="nbdb" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673601 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="sbdb" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673608 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="sbdb" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673618 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673625 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673723 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="northd" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673738 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc 
kubenswrapper[4835]: I0201 07:31:30.673747 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="nbdb" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673755 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673765 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673772 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673782 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673790 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-ovn-metrics" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673800 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3f2951-1c06-484a-9c2e-502d2adaa6cd" containerName="collect-profiles" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673808 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="sbdb" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673816 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovn-acl-logging" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673824 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="kube-rbac-proxy-node" Feb 01 07:31:30 crc kubenswrapper[4835]: E0201 07:31:30.673927 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.673936 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.675626 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerName="ovnkube-controller" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.677827 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685688 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685783 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685832 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685878 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685944 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.685973 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket" (OuterVolumeSpecName: "log-socket") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686004 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686015 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686039 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686060 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686108 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686115 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686144 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686167 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686200 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686215 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686254 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686299 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686323 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686371 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686438 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686451 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686476 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686501 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686551 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686606 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686661 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686698 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686761 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686800 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") pod \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\" (UID: \"bd62f19b-07ab-4cc5-84a3-2f097c278de7\") " Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686512 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash" (OuterVolumeSpecName: "host-slash") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686803 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686805 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.686825 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687025 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687086 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log" (OuterVolumeSpecName: "node-log") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687226 4835 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687256 4835 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-node-log\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687280 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687342 4835 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-log-socket\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687369 4835 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687391 4835 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687443 4835 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687468 4835 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687495 4835 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687521 4835 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687547 4835 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687570 4835 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687714 4835 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-slash\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc 
kubenswrapper[4835]: I0201 07:31:30.687743 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687825 4835 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687850 4835 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.687734 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.692604 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft" (OuterVolumeSpecName: "kube-api-access-x78ft") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "kube-api-access-x78ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.692771 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.717880 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bd62f19b-07ab-4cc5-84a3-2f097c278de7" (UID: "bd62f19b-07ab-4cc5-84a3-2f097c278de7"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788702 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-config\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788752 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-slash\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788775 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-log-socket\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-netd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-netns\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.788955 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-script-lib\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovn-node-metrics-cert\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789189 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-var-lib-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789224 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-kubelet\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-ovn\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789294 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789320 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-bin\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789347 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-etc-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789429 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-env-overrides\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789484 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-systemd-units\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789531 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjv9\" (UniqueName: \"kubernetes.io/projected/d5ddf04a-df44-470d-bed4-da3b619f9bf9-kube-api-access-4mjv9\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789570 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 
07:31:30.789612 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-node-log\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789663 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-systemd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789702 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789778 4835 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd62f19b-07ab-4cc5-84a3-2f097c278de7-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789802 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x78ft\" (UniqueName: \"kubernetes.io/projected/bd62f19b-07ab-4cc5-84a3-2f097c278de7-kube-api-access-x78ft\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789820 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.789875 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd62f19b-07ab-4cc5-84a3-2f097c278de7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.890962 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-env-overrides\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891051 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-systemd-units\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891111 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mjv9\" (UniqueName: \"kubernetes.io/projected/d5ddf04a-df44-470d-bed4-da3b619f9bf9-kube-api-access-4mjv9\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891151 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891189 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-node-log\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891234 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-systemd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891264 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891256 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-systemd-units\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891302 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-config\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891466 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-slash\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891515 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-log-socket\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891616 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-netd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891642 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-node-log\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891676 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-netns\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891687 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891718 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-netd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891742 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-log-socket\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891746 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-systemd\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891767 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891615 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-slash\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891792 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-netns\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891821 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-script-lib\") pod \"ovnkube-node-mdtv2\" (UID: 
\"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891870 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovn-node-metrics-cert\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891890 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-var-lib-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891907 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-kubelet\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891923 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-ovn\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891936 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891950 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-bin\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.891965 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-etc-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892007 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-etc-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892026 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-env-overrides\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" 
Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-run-ovn\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892181 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-kubelet\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892179 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-var-lib-openvswitch\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892237 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-run-ovn-kubernetes\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892254 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5ddf04a-df44-470d-bed4-da3b619f9bf9-host-cni-bin\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.892611 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-config\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.893121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovnkube-script-lib\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.897091 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5ddf04a-df44-470d-bed4-da3b619f9bf9-ovn-node-metrics-cert\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.925630 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mjv9\" (UniqueName: \"kubernetes.io/projected/d5ddf04a-df44-470d-bed4-da3b619f9bf9-kube-api-access-4mjv9\") pod \"ovnkube-node-mdtv2\" (UID: \"d5ddf04a-df44-470d-bed4-da3b619f9bf9\") " pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:30 crc kubenswrapper[4835]: I0201 07:31:30.990457 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.375118 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/2.log" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.376692 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/1.log" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.376793 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e" containerID="bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22" exitCode=2 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.376893 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerDied","Data":"bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.376960 4835 scope.go:117] "RemoveContainer" containerID="c7f67e3606f318159aa33593125d45284e9277e6418b039476366b909aa6cf27" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.377553 4835 scope.go:117] "RemoveContainer" containerID="bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.377852 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-25s9j_openshift-multus(c9342eb7-b5ae-47b2-a56d-91ae886e5f0e)\"" pod="openshift-multus/multus-25s9j" podUID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.380517 4835 generic.go:334] "Generic (PLEG): container finished" podID="d5ddf04a-df44-470d-bed4-da3b619f9bf9" containerID="24f5512aa6b4417e804a55252efc5ac2377797792510fedd6d27d314b906fe74" exitCode=0 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.380593 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerDied","Data":"24f5512aa6b4417e804a55252efc5ac2377797792510fedd6d27d314b906fe74"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.380636 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"b5514c320c794ac078eefc8f358925be6b5d029b1381514b07ff5668586492a3"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.383851 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovnkube-controller/3.log" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.390584 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovn-acl-logging/0.log" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.391912 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5z5dl_bd62f19b-07ab-4cc5-84a3-2f097c278de7/ovn-controller/0.log" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393012 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" exitCode=0 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393067 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" exitCode=0 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393080 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" exitCode=0 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393095 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" exitCode=0 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393105 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" exitCode=0 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393147 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" exitCode=0 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393159 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" exitCode=143 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393172 4835 generic.go:334] "Generic (PLEG): container finished" podID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" exitCode=143 Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393226 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393264 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393315 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393337 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393352 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} Feb 01 07:31:31 crc 
kubenswrapper[4835]: I0201 07:31:31.393394 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393457 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393472 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393482 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393491 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393500 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393620 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393631 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393639 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393647 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393801 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393817 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393834 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393844 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393853 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393863 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393872 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393880 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393890 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393898 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.393956 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394017 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394033 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394130 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394149 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394160 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394230 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394239 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394247 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394255 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394263 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394271 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394279 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394293 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" event={"ID":"bd62f19b-07ab-4cc5-84a3-2f097c278de7","Type":"ContainerDied","Data":"f2c33318aecd4d2a27c36deae504704dd76ecedc9768925c3ee036665f4c99e8"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394310 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394320 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394328 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394337 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394346 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394381 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394391 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394399 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394445 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394459 4835 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.394650 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5z5dl" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.436696 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.492691 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5z5dl"] Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.494613 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.496552 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5z5dl"] Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.536692 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.558058 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.581752 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.585772 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd62f19b-07ab-4cc5-84a3-2f097c278de7" path="/var/lib/kubelet/pods/bd62f19b-07ab-4cc5-84a3-2f097c278de7/volumes" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.618980 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.642595 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.671542 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.690602 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.721082 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.739035 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.741967 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742014 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} err="failed to get container status \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": rpc error: code = NotFound desc = could not find container \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742040 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.742447 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742483 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} err="failed to get container status \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742504 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.742849 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742883 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} err="failed to get container status \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.742941 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" Feb 01 07:31:31 crc 
kubenswrapper[4835]: E0201 07:31:31.743325 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743349 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} err="failed to get container status \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743366 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.743595 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743618 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} err="failed to get container status \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743632 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.743820 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.743845 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} err="failed to get container status \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": rpc error: code = NotFound desc = could not find container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: 
I0201 07:31:31.743859 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.744092 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744112 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} err="failed to get container status \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744125 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.744518 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744542 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} err="failed to get container status \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744556 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.744798 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc" Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744819 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} err="failed to get container status \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container 
with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.744831 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"
Feb 01 07:31:31 crc kubenswrapper[4835]: E0201 07:31:31.745126 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745151 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} err="failed to get container status \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745188 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745477 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} err="failed to get container status \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": rpc error: code = NotFound desc = could not find container \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745496 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745814 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} err="failed to get container status \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.745836 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746107 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} err="failed to get container status \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746126 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746371 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} err="failed to get container status \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746391 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746695 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} err="failed to get container status \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746709 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746893 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} err="failed to get container status \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": rpc error: code = NotFound desc = could not find container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.746911 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747119 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} err="failed to get container status \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747140 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747347 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} err="failed to get container status \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747372 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747669 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} err="failed to get container status \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747712 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747901 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} err="failed to get container status \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.747915 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748115 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} err="failed to get container status \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": rpc error: code = NotFound desc = could not find container \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748135 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748363 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} err="failed to get container status \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748382 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748574 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} err="failed to get container status \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748592 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748778 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} err="failed to get container status \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.748796 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749054 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} err="failed to get container status \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749083 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749362 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} err="failed to get container status \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": rpc error: code = NotFound desc = could not find container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749397 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749719 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} err="failed to get container status \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749779 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749977 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} err="failed to get container status \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.749997 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750230 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} err="failed to get container status \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750250 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750495 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} err="failed to get container status \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750511 4835 scope.go:117] "RemoveContainer" containerID="a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750696 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca"} err="failed to get container status \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": rpc error: code = NotFound desc = could not find container \"a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca\": container with ID starting with a53fa80b45c7ae4ed942ce4accd3b8e7a245e9e9af47c65395d277aa2373c7ca not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750716 4835 scope.go:117] "RemoveContainer" containerID="9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750969 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe"} err="failed to get container status \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": rpc error: code = NotFound desc = could not find container \"9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe\": container with ID starting with 9fdd6053ce9cfe758671cef50a3c3831ce22d8f3841a636238cd164e40f765fe not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.750986 4835 scope.go:117] "RemoveContainer" containerID="85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751157 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227"} err="failed to get container status \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": rpc error: code = NotFound desc = could not find container \"85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227\": container with ID starting with 85485aabd6a53c0e1ef4cd95ad22cb0920d6efcbe61e3ddb00a34f40a4910227 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751173 4835 scope.go:117] "RemoveContainer" containerID="0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751348 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514"} err="failed to get container status \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": rpc error: code = NotFound desc = could not find container \"0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514\": container with ID starting with 0b95716d13c607914bd1b02e76db8d358ebb37df5ab77a1cf7fc24b7c4e61514 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751372 4835 scope.go:117] "RemoveContainer" containerID="c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751680 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4"} err="failed to get container status \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": rpc error: code = NotFound desc = could not find container \"c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4\": container with ID starting with c439658572a0bd6a08e49e8bddd2d02cb3debe0ca4850911ffd589e39862cbc4 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.751699 4835 scope.go:117] "RemoveContainer" containerID="1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752082 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc"} err="failed to get container status \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": rpc error: code = NotFound desc = could not find container \"1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc\": container with ID starting with 1449de2674e516bdbc4f68880301208e526ae8d923e146e72df13ddbcd6125dc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752131 4835 scope.go:117] "RemoveContainer" containerID="044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752435 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84"} err="failed to get container status \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": rpc error: code = NotFound desc = could not find container \"044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84\": container with ID starting with 044fe741349bd64d8675f1e98ddf0d3106fc93171a9c60ca145c2c835fb7ac84 not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752478 4835 scope.go:117] "RemoveContainer" containerID="03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752721 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc"} err="failed to get container status \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": rpc error: code = NotFound desc = could not find container \"03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc\": container with ID starting with 03d205a800c10f3f8a380564ebca10727dc6a38f2f64675389ac7185193ebcdc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752742 4835 scope.go:117] "RemoveContainer" containerID="8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752934 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc"} err="failed to get container status \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": rpc error: code = NotFound desc = could not find container \"8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc\": container with ID starting with 8b82ff1626dd34e9be2048081f6925d976574509dacd418405a2b6c0a1b3bbbc not found: ID does not exist"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.752951 4835 scope.go:117] "RemoveContainer" containerID="b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"
Feb 01 07:31:31 crc kubenswrapper[4835]: I0201 07:31:31.753101 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764"} err="failed to get container status \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": rpc error: code = NotFound desc = could not find container \"b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764\": container with ID starting with b76fe57810fc48ee9486aaaab54faa691f07ff6e21a493e72446e58f60b2d764 not found: ID does not exist"
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.402191 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/2.log"
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407832 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"dbcbc256b0f131ca2b6c1cfbd3aeb5371427e85f319782dbf393cc93bb2fd2b0"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407889 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"5db52c4bd2827eddeba3b09b09546d9709dd5e26cdac6cf311a8ec574d561439"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"785f885832ef7f890099766870311dc6a1c4249a24ff3f7f2ee9f620842f97db"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407918 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"829756ac4125eb06d368e1d99fa1ee2ed9484a5e3089823b4f99bacb2042fdd8"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407929 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"992415a416ad4d1e420259ed20c17ccaec6977a78fb9806ab8897b66bb75a925"}
Feb 01 07:31:32 crc kubenswrapper[4835]: I0201 07:31:32.407950 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"8097a2ea0075ffa8d20b9fbea73d6acb0b6cc54d195f0443940f2c15f56ab527"}
Feb 01 07:31:35 crc kubenswrapper[4835]: I0201 07:31:35.434492 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"46207584995c980d15104b73ad0655c7051ed75ec1f99cd69ecfecfc739afab4"}
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.451881 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" event={"ID":"d5ddf04a-df44-470d-bed4-da3b619f9bf9","Type":"ContainerStarted","Data":"1774ab46905b3b12b974f1840568e297f715ac691a55523ad947ef04623faaa9"}
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.453386 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.453580 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.453740 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.485693 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.486072 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2"
Feb 01 07:31:37 crc kubenswrapper[4835]: I0201 07:31:37.495612 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" podStartSLOduration=7.49558774 podStartE2EDuration="7.49558774s" podCreationTimestamp="2026-02-01 07:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:31:37.490374581 +0000 UTC m=+570.610811065" watchObservedRunningTime="2026-02-01 07:31:37.49558774 +0000 UTC m=+570.616024204"
Feb 01 07:31:43 crc kubenswrapper[4835]: I0201 07:31:43.567073 4835 scope.go:117] "RemoveContainer" containerID="bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22"
Feb 01 07:31:43 crc kubenswrapper[4835]: E0201 07:31:43.568041 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-25s9j_openshift-multus(c9342eb7-b5ae-47b2-a56d-91ae886e5f0e)\"" pod="openshift-multus/multus-25s9j" podUID="c9342eb7-b5ae-47b2-a56d-91ae886e5f0e"
Feb 01 07:31:55 crc kubenswrapper[4835]: I0201 07:31:55.191938 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:31:55 crc kubenswrapper[4835]: I0201 07:31:55.192657 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.481716 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"]
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.486655 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.501249 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"] Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.572169 4835 scope.go:117] "RemoveContainer" containerID="bc898c375e02b77f5d0608257a9dc49631ac50c8ceab7e6be8a7327889f64c22" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.605458 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.605525 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.605632 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.707676 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.707895 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.707979 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc 
kubenswrapper[4835]: I0201 07:31:57.709644 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.709643 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.744402 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: I0201 07:31:57.809799 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: E0201 07:31:57.853257 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(8c164d2b9babb42a33b20baec3bb11a79bea7669d08edffccc2ed4dd179c8b68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 01 07:31:57 crc kubenswrapper[4835]: E0201 07:31:57.853351 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(8c164d2b9babb42a33b20baec3bb11a79bea7669d08edffccc2ed4dd179c8b68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: E0201 07:31:57.853388 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(8c164d2b9babb42a33b20baec3bb11a79bea7669d08edffccc2ed4dd179c8b68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:57 crc kubenswrapper[4835]: E0201 07:31:57.853515 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace(042bee18-1826-42db-a17a-6f0e3d488c16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace(042bee18-1826-42db-a17a-6f0e3d488c16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(8c164d2b9babb42a33b20baec3bb11a79bea7669d08edffccc2ed4dd179c8b68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" Feb 01 07:31:58 crc kubenswrapper[4835]: I0201 07:31:58.592739 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-25s9j_c9342eb7-b5ae-47b2-a56d-91ae886e5f0e/kube-multus/2.log" Feb 01 07:31:58 crc kubenswrapper[4835]: I0201 07:31:58.593460 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:58 crc kubenswrapper[4835]: I0201 07:31:58.593485 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-25s9j" event={"ID":"c9342eb7-b5ae-47b2-a56d-91ae886e5f0e","Type":"ContainerStarted","Data":"0a2144e34183d71af06e054153405ef8fcb42063704ecebf25092a89df054ed9"} Feb 01 07:31:58 crc kubenswrapper[4835]: I0201 07:31:58.594038 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:58 crc kubenswrapper[4835]: E0201 07:31:58.634712 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(86ee98b0a66bb1edcf7e0f987ca19d666ab87d4e2386933d95ee45d0b69b9e95): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 01 07:31:58 crc kubenswrapper[4835]: E0201 07:31:58.634820 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(86ee98b0a66bb1edcf7e0f987ca19d666ab87d4e2386933d95ee45d0b69b9e95): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:58 crc kubenswrapper[4835]: E0201 07:31:58.634871 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(86ee98b0a66bb1edcf7e0f987ca19d666ab87d4e2386933d95ee45d0b69b9e95): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:31:58 crc kubenswrapper[4835]: E0201 07:31:58.634972 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace(042bee18-1826-42db-a17a-6f0e3d488c16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace(042bee18-1826-42db-a17a-6f0e3d488c16)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_openshift-marketplace_042bee18-1826-42db-a17a-6f0e3d488c16_0(86ee98b0a66bb1edcf7e0f987ca19d666ab87d4e2386933d95ee45d0b69b9e95): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" Feb 01 07:32:01 crc kubenswrapper[4835]: I0201 07:32:01.030016 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mdtv2" Feb 01 07:32:09 crc kubenswrapper[4835]: I0201 07:32:09.566471 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:32:09 crc kubenswrapper[4835]: I0201 07:32:09.567542 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:32:09 crc kubenswrapper[4835]: I0201 07:32:09.847838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g"] Feb 01 07:32:10 crc kubenswrapper[4835]: I0201 07:32:10.677832 4835 generic.go:334] "Generic (PLEG): container finished" podID="042bee18-1826-42db-a17a-6f0e3d488c16" containerID="8062e953dae2e28ddc40103a506983b616b949425b693d1e6aa423ddac541f1b" exitCode=0 Feb 01 07:32:10 crc kubenswrapper[4835]: I0201 07:32:10.677963 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerDied","Data":"8062e953dae2e28ddc40103a506983b616b949425b693d1e6aa423ddac541f1b"} Feb 01 07:32:10 crc kubenswrapper[4835]: I0201 07:32:10.678022 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerStarted","Data":"a3c47286c8b6f7000a88e99dc173b33b188c1b566b7c054b5310da55f59ac601"} Feb 01 07:32:10 crc kubenswrapper[4835]: I0201 07:32:10.681660 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 07:32:12 crc kubenswrapper[4835]: I0201 07:32:12.695746 4835 generic.go:334] "Generic (PLEG): container finished" podID="042bee18-1826-42db-a17a-6f0e3d488c16" containerID="5f0f8ee150508c7ea5a48ce98e50eb6eee3f894d53028cc3bde304a59ac17ca5" exitCode=0 Feb 01 07:32:12 crc kubenswrapper[4835]: I0201 07:32:12.695812 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerDied","Data":"5f0f8ee150508c7ea5a48ce98e50eb6eee3f894d53028cc3bde304a59ac17ca5"} Feb 01 07:32:13 crc kubenswrapper[4835]: I0201 07:32:13.705131 4835 generic.go:334] "Generic (PLEG): container finished" podID="042bee18-1826-42db-a17a-6f0e3d488c16" containerID="b705539489355a2f4f704e6a327bf087ad33a74a620a6d9e0ac64ad131705044" exitCode=0 Feb 01 07:32:13 crc kubenswrapper[4835]: I0201 07:32:13.705314 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerDied","Data":"b705539489355a2f4f704e6a327bf087ad33a74a620a6d9e0ac64ad131705044"} Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.010180 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.070084 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") pod \"042bee18-1826-42db-a17a-6f0e3d488c16\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.070157 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") pod \"042bee18-1826-42db-a17a-6f0e3d488c16\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.070309 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") pod \"042bee18-1826-42db-a17a-6f0e3d488c16\" (UID: \"042bee18-1826-42db-a17a-6f0e3d488c16\") " Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.071920 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle" (OuterVolumeSpecName: "bundle") pod "042bee18-1826-42db-a17a-6f0e3d488c16" (UID: "042bee18-1826-42db-a17a-6f0e3d488c16"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.080588 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr" (OuterVolumeSpecName: "kube-api-access-kwgjr") pod "042bee18-1826-42db-a17a-6f0e3d488c16" (UID: "042bee18-1826-42db-a17a-6f0e3d488c16"). InnerVolumeSpecName "kube-api-access-kwgjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.099657 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util" (OuterVolumeSpecName: "util") pod "042bee18-1826-42db-a17a-6f0e3d488c16" (UID: "042bee18-1826-42db-a17a-6f0e3d488c16"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.171587 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.171646 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwgjr\" (UniqueName: \"kubernetes.io/projected/042bee18-1826-42db-a17a-6f0e3d488c16-kube-api-access-kwgjr\") on node \"crc\" DevicePath \"\"" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.171672 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/042bee18-1826-42db-a17a-6f0e3d488c16-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.723008 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" event={"ID":"042bee18-1826-42db-a17a-6f0e3d488c16","Type":"ContainerDied","Data":"a3c47286c8b6f7000a88e99dc173b33b188c1b566b7c054b5310da55f59ac601"} Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.723067 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c47286c8b6f7000a88e99dc173b33b188c1b566b7c054b5310da55f59ac601" Feb 01 07:32:15 crc kubenswrapper[4835]: I0201 07:32:15.723086 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.191492 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.192002 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.192051 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.192673 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.192738 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f" gracePeriod=600 Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.805202 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f" exitCode=0 Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.805275 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f"} Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.805564 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa"} Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.805592 4835 scope.go:117] "RemoveContainer" containerID="9e3104eb77be3b50140e525cdfbf7f55a456b28fd34136df6dc0b6920b3a97bf" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809436 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h"] Feb 01 07:32:25 crc kubenswrapper[4835]: E0201 07:32:25.809626 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="extract" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809641 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="extract" Feb 01 07:32:25 crc kubenswrapper[4835]: E0201 07:32:25.809658 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="util" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809664 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="util" Feb 01 07:32:25 crc kubenswrapper[4835]: E0201 07:32:25.809673 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="pull" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809680 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="pull" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.809761 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="042bee18-1826-42db-a17a-6f0e3d488c16" containerName="extract" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.810103 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.813862 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-r8t9q" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.814026 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.814054 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.814050 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.815756 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.837681 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h"] Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.917649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-webhook-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.917890 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-apiservice-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:25 crc kubenswrapper[4835]: I0201 07:32:25.917984 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxms\" (UniqueName: \"kubernetes.io/projected/91863ede-5184-40d2-8fba-1f65d6fdc785-kube-api-access-lxxms\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.018909 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-webhook-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.018962 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-apiservice-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.018983 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxms\" (UniqueName: \"kubernetes.io/projected/91863ede-5184-40d2-8fba-1f65d6fdc785-kube-api-access-lxxms\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.025625 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-apiservice-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.026103 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91863ede-5184-40d2-8fba-1f65d6fdc785-webhook-cert\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.035273 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxms\" (UniqueName: \"kubernetes.io/projected/91863ede-5184-40d2-8fba-1f65d6fdc785-kube-api-access-lxxms\") pod \"metallb-operator-controller-manager-56dbb5cfb5-ls84h\" (UID: \"91863ede-5184-40d2-8fba-1f65d6fdc785\") " pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.124642 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.155802 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr"] Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.156978 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.158590 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.158864 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6jkfp" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.158968 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.173639 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr"] Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.221151 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-apiservice-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.221220 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-webhook-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.221248 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flf84\" (UniqueName: \"kubernetes.io/projected/c2ca8e92-ef3f-442a-830f-0e3c37d76087-kube-api-access-flf84\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.322473 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-webhook-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.322805 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flf84\" (UniqueName: \"kubernetes.io/projected/c2ca8e92-ef3f-442a-830f-0e3c37d76087-kube-api-access-flf84\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.322868 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-apiservice-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.339167 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-webhook-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.348005 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2ca8e92-ef3f-442a-830f-0e3c37d76087-apiservice-cert\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.352270 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flf84\" (UniqueName: \"kubernetes.io/projected/c2ca8e92-ef3f-442a-830f-0e3c37d76087-kube-api-access-flf84\") pod \"metallb-operator-webhook-server-58b8447d8-56lmr\" (UID: \"c2ca8e92-ef3f-442a-830f-0e3c37d76087\") " pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.392075 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h"] Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.480512 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.722119 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr"] Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.812056 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" event={"ID":"91863ede-5184-40d2-8fba-1f65d6fdc785","Type":"ContainerStarted","Data":"749e11ec6a06ab913e302bac7c95c2bd78a90ba2132a58a5f523b3faeed645ed"} Feb 01 07:32:26 crc kubenswrapper[4835]: I0201 07:32:26.813559 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" event={"ID":"c2ca8e92-ef3f-442a-830f-0e3c37d76087","Type":"ContainerStarted","Data":"0405ca3062806c7e1e799311c5ac6630b116cf60c9877cf71dea5a7c2a963084"} Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.849500 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" event={"ID":"91863ede-5184-40d2-8fba-1f65d6fdc785","Type":"ContainerStarted","Data":"de0a34f9af6fb1363b01e39d80935961f1b0d06a629c554a2510d72a174cf948"} Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.850265 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.851728 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" event={"ID":"c2ca8e92-ef3f-442a-830f-0e3c37d76087","Type":"ContainerStarted","Data":"aea9915ceaf5281d79bf1513c8113c0b5e034a909f0658df3b2b5f50721bc21d"} Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.851914 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.883634 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" podStartSLOduration=2.626444964 podStartE2EDuration="6.883615391s" podCreationTimestamp="2026-02-01 07:32:25 +0000 UTC" firstStartedPulling="2026-02-01 07:32:26.402546891 +0000 UTC m=+619.522983325" lastFinishedPulling="2026-02-01 07:32:30.659717318 +0000 UTC m=+623.780153752" observedRunningTime="2026-02-01 07:32:31.879577152 +0000 UTC m=+625.000013606" watchObservedRunningTime="2026-02-01 07:32:31.883615391 +0000 UTC m=+625.004051835" Feb 01 07:32:31 crc kubenswrapper[4835]: I0201 07:32:31.913065 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" podStartSLOduration=1.912356607 podStartE2EDuration="5.913036121s" podCreationTimestamp="2026-02-01 07:32:26 +0000 UTC" firstStartedPulling="2026-02-01 07:32:26.7303108 +0000 UTC m=+619.850747234" lastFinishedPulling="2026-02-01 07:32:30.730990314 +0000 UTC m=+623.851426748" observedRunningTime="2026-02-01 07:32:31.907153361 +0000 UTC m=+625.027589795" watchObservedRunningTime="2026-02-01 07:32:31.913036121 +0000 UTC m=+625.033472595" Feb 01 07:32:46 crc kubenswrapper[4835]: I0201 07:32:46.486202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-58b8447d8-56lmr" Feb 01 07:33:06 crc kubenswrapper[4835]: I0201 07:33:06.128291 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-56dbb5cfb5-ls84h" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.018398 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.019501 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.021763 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.021825 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8q2cn" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.023602 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9qwwp"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.026172 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.031810 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.037743 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.041512 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.145703 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-8s85p"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.146476 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.148743 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.148788 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.149136 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-slwrd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.149360 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.167904 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-metrics\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.167984 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rd96\" (UniqueName: \"kubernetes.io/projected/5c427241-76d6-4772-9a78-74952bdbf29f-kube-api-access-7rd96\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168007 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5c427241-76d6-4772-9a78-74952bdbf29f-frr-startup\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168028 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-reloader\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168045 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-sockets\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168065 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx8kd\" (UniqueName: \"kubernetes.io/projected/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-kube-api-access-fx8kd\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168139 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168161 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c427241-76d6-4772-9a78-74952bdbf29f-metrics-certs\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.168211 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-conf\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.176229 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-6qvjg"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.177183 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.178795 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.204032 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6qvjg"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269237 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-metrics\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269290 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rd96\" (UniqueName: \"kubernetes.io/projected/5c427241-76d6-4772-9a78-74952bdbf29f-kube-api-access-7rd96\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269315 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5c427241-76d6-4772-9a78-74952bdbf29f-frr-startup\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269351 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcsp\" (UniqueName: \"kubernetes.io/projected/0975cec6-f6ff-4188-9435-864a46ad1740-kube-api-access-dgcsp\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269380 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-reloader\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269403 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-cert\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269450 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-sockets\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269479 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx8kd\" (UniqueName: \"kubernetes.io/projected/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-kube-api-access-fx8kd\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269506 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-metrics-certs\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269531 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269566 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c427241-76d6-4772-9a78-74952bdbf29f-metrics-certs\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269592 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269628 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-conf\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269656 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzk8\" (UniqueName: \"kubernetes.io/projected/86105024-7ff9-4d38-9333-c7c7b241a5c5-kube-api-access-pxzk8\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269669 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-metrics\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269679 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0975cec6-f6ff-4188-9435-864a46ad1740-metallb-excludel2\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269730 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-metrics-certs\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.269843 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-sockets\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.270622 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5c427241-76d6-4772-9a78-74952bdbf29f-frr-startup\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.270719 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-reloader\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.270905 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5c427241-76d6-4772-9a78-74952bdbf29f-frr-conf\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.276994 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.284221 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c427241-76d6-4772-9a78-74952bdbf29f-metrics-certs\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.288229 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rd96\" (UniqueName: \"kubernetes.io/projected/5c427241-76d6-4772-9a78-74952bdbf29f-kube-api-access-7rd96\") pod \"frr-k8s-9qwwp\" (UID: \"5c427241-76d6-4772-9a78-74952bdbf29f\") " pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.292942 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx8kd\" (UniqueName: \"kubernetes.io/projected/e60f3db5-acc8-404c-a98c-6e6bfb05d6e9-kube-api-access-fx8kd\") pod \"frr-k8s-webhook-server-7df86c4f6c-7ldwd\" (UID: \"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.334851 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.342020 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371389 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371484 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzk8\" (UniqueName: \"kubernetes.io/projected/86105024-7ff9-4d38-9333-c7c7b241a5c5-kube-api-access-pxzk8\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371507 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0975cec6-f6ff-4188-9435-864a46ad1740-metallb-excludel2\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371527 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-metrics-certs\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371577 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgcsp\" (UniqueName: \"kubernetes.io/projected/0975cec6-f6ff-4188-9435-864a46ad1740-kube-api-access-dgcsp\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-cert\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.371634 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-metrics-certs\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: E0201 07:33:07.373165 4835 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 01 07:33:07 crc kubenswrapper[4835]: E0201 07:33:07.373272 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist podName:0975cec6-f6ff-4188-9435-864a46ad1740 nodeName:}" failed. No retries permitted until 2026-02-01 07:33:07.873239695 +0000 UTC m=+660.993676299 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist") pod "speaker-8s85p" (UID: "0975cec6-f6ff-4188-9435-864a46ad1740") : secret "metallb-memberlist" not found Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.373525 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0975cec6-f6ff-4188-9435-864a46ad1740-metallb-excludel2\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.377630 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-metrics-certs\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.378516 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-metrics-certs\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.378635 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86105024-7ff9-4d38-9333-c7c7b241a5c5-cert\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.399994 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgcsp\" (UniqueName: \"kubernetes.io/projected/0975cec6-f6ff-4188-9435-864a46ad1740-kube-api-access-dgcsp\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.404819 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzk8\" (UniqueName: \"kubernetes.io/projected/86105024-7ff9-4d38-9333-c7c7b241a5c5-kube-api-access-pxzk8\") pod \"controller-6968d8fdc4-6qvjg\" (UID: \"86105024-7ff9-4d38-9333-c7c7b241a5c5\") " pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.487900 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.666436 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6qvjg"] Feb 01 07:33:07 crc kubenswrapper[4835]: W0201 07:33:07.669823 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86105024_7ff9_4d38_9333_c7c7b241a5c5.slice/crio-297023993fd8efda54f32b5742c11a160069b98d2c71e82c938e7f26e3c8154b WatchSource:0}: Error finding container 297023993fd8efda54f32b5742c11a160069b98d2c71e82c938e7f26e3c8154b: Status 404 returned error can't find the container with id 297023993fd8efda54f32b5742c11a160069b98d2c71e82c938e7f26e3c8154b Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.763783 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd"] Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.880675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:07 crc kubenswrapper[4835]: I0201 07:33:07.886427 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0975cec6-f6ff-4188-9435-864a46ad1740-memberlist\") pod \"speaker-8s85p\" (UID: \"0975cec6-f6ff-4188-9435-864a46ad1740\") " pod="metallb-system/speaker-8s85p" Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.059457 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-slwrd" Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.068039 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-8s85p" Feb 01 07:33:08 crc kubenswrapper[4835]: W0201 07:33:08.090901 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0975cec6_f6ff_4188_9435_864a46ad1740.slice/crio-4a7beeec794c6c028445dfe36a16c3e43c4745798f4a72bdf0b49d0af3ca7cb0 WatchSource:0}: Error finding container 4a7beeec794c6c028445dfe36a16c3e43c4745798f4a72bdf0b49d0af3ca7cb0: Status 404 returned error can't find the container with id 4a7beeec794c6c028445dfe36a16c3e43c4745798f4a72bdf0b49d0af3ca7cb0 Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.094631 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" event={"ID":"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9","Type":"ContainerStarted","Data":"9d9385f2e885cffa4fe19e9729f9bc25f27390211bcfdfecc7fd90c06f3b8303"} Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.097903 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"f6f645f3d29c3449925d625d4603c7ea0521e86dedf703651a60b4af08826d92"} Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.099523 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6qvjg" event={"ID":"86105024-7ff9-4d38-9333-c7c7b241a5c5","Type":"ContainerStarted","Data":"b53313c7fa50e96458d12a9e819206063efa1fe75cbc78cbc89f17303f5db3e3"} Feb 01 07:33:08 crc kubenswrapper[4835]: I0201 07:33:08.099576 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6qvjg" event={"ID":"86105024-7ff9-4d38-9333-c7c7b241a5c5","Type":"ContainerStarted","Data":"297023993fd8efda54f32b5742c11a160069b98d2c71e82c938e7f26e3c8154b"} Feb 01 07:33:09 crc kubenswrapper[4835]: I0201 07:33:09.109185 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8s85p" event={"ID":"0975cec6-f6ff-4188-9435-864a46ad1740","Type":"ContainerStarted","Data":"b57eb59308467a79265de6ed788edb65d0142c6ead6841ade0aebb1e017c44f2"} Feb 01 07:33:09 crc kubenswrapper[4835]: I0201 07:33:09.109450 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8s85p" event={"ID":"0975cec6-f6ff-4188-9435-864a46ad1740","Type":"ContainerStarted","Data":"4a7beeec794c6c028445dfe36a16c3e43c4745798f4a72bdf0b49d0af3ca7cb0"} Feb 01 07:33:11 crc kubenswrapper[4835]: I0201 07:33:11.127244 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6qvjg" event={"ID":"86105024-7ff9-4d38-9333-c7c7b241a5c5","Type":"ContainerStarted","Data":"8148c97c2d1729a4cfd2254f790d50ec52fd9652729b74e77be590c2d57dd1f3"} Feb 01 07:33:11 crc kubenswrapper[4835]: I0201 07:33:11.127584 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:11 crc kubenswrapper[4835]: I0201 07:33:11.143914 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-6qvjg" podStartSLOduration=0.968947836 podStartE2EDuration="4.143901369s" podCreationTimestamp="2026-02-01 07:33:07 +0000 UTC" firstStartedPulling="2026-02-01 07:33:07.788020905 +0000 UTC m=+660.908457339" lastFinishedPulling="2026-02-01 07:33:10.962974448 +0000 UTC m=+664.083410872" observedRunningTime="2026-02-01 07:33:11.142922663 +0000 UTC m=+664.263359097" 
watchObservedRunningTime="2026-02-01 07:33:11.143901369 +0000 UTC m=+664.264337803" Feb 01 07:33:12 crc kubenswrapper[4835]: I0201 07:33:12.135541 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-8s85p" event={"ID":"0975cec6-f6ff-4188-9435-864a46ad1740","Type":"ContainerStarted","Data":"79e951890140423fbd8b935f2b9b4f36f5756e0fb078824c64f854a64728379a"} Feb 01 07:33:12 crc kubenswrapper[4835]: I0201 07:33:12.157245 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-8s85p" podStartSLOduration=2.587147172 podStartE2EDuration="5.157221146s" podCreationTimestamp="2026-02-01 07:33:07 +0000 UTC" firstStartedPulling="2026-02-01 07:33:08.407783328 +0000 UTC m=+661.528219762" lastFinishedPulling="2026-02-01 07:33:10.977857302 +0000 UTC m=+664.098293736" observedRunningTime="2026-02-01 07:33:12.154858992 +0000 UTC m=+665.275295446" watchObservedRunningTime="2026-02-01 07:33:12.157221146 +0000 UTC m=+665.277657620" Feb 01 07:33:13 crc kubenswrapper[4835]: I0201 07:33:13.145063 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-8s85p" Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.161388 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" event={"ID":"e60f3db5-acc8-404c-a98c-6e6bfb05d6e9","Type":"ContainerStarted","Data":"cebedd70c405f420fec8624fd4c4e8d3a8b0db318d764ee54418869a78b7f5e4"} Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.161784 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.164851 4835 generic.go:334] "Generic (PLEG): container finished" podID="5c427241-76d6-4772-9a78-74952bdbf29f" containerID="a19737494a47edebf6868e3c306147e7c20c29f6a80c67a14d325fc4f60be064" exitCode=0 Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.164885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerDied","Data":"a19737494a47edebf6868e3c306147e7c20c29f6a80c67a14d325fc4f60be064"} Feb 01 07:33:15 crc kubenswrapper[4835]: I0201 07:33:15.180939 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" podStartSLOduration=2.039366727 podStartE2EDuration="9.180902764s" podCreationTimestamp="2026-02-01 07:33:06 +0000 UTC" firstStartedPulling="2026-02-01 07:33:07.772099622 +0000 UTC m=+660.892536066" lastFinishedPulling="2026-02-01 07:33:14.913635669 +0000 UTC m=+668.034072103" observedRunningTime="2026-02-01 07:33:15.174618623 +0000 UTC m=+668.295055087" watchObservedRunningTime="2026-02-01 07:33:15.180902764 +0000 UTC m=+668.301339198" Feb 01 07:33:16 crc kubenswrapper[4835]: I0201 07:33:16.175104 4835 generic.go:334] "Generic (PLEG): container finished" podID="5c427241-76d6-4772-9a78-74952bdbf29f" containerID="b6d97e1daf21a6d088e16a3e02a68d02b41da503e6a62e7f75630fb90021aed6" exitCode=0 Feb 01 07:33:16 crc kubenswrapper[4835]: I0201 07:33:16.175179 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerDied","Data":"b6d97e1daf21a6d088e16a3e02a68d02b41da503e6a62e7f75630fb90021aed6"} Feb 01 07:33:17 crc kubenswrapper[4835]: I0201 07:33:17.187606 4835 generic.go:334] "Generic (PLEG): 
container finished" podID="5c427241-76d6-4772-9a78-74952bdbf29f" containerID="d547d804da139e781b2c16d8b6d467b3b9a1ef30ed3fe075c1449461552996f7" exitCode=0 Feb 01 07:33:17 crc kubenswrapper[4835]: I0201 07:33:17.188973 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerDied","Data":"d547d804da139e781b2c16d8b6d467b3b9a1ef30ed3fe075c1449461552996f7"} Feb 01 07:33:17 crc kubenswrapper[4835]: I0201 07:33:17.493032 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-6qvjg" Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.075226 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-8s85p" Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206658 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"a2438333ce775d050aad366fecaf228466e901d4e42424978fab513975eadf3e"} Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206723 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"daa16a6f56699489a3a1b1ebf3bd69cc828b106e8ee3107596d2d703c3092557"} Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206742 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"66c3b1765c8a2e71ac7865f7dc096a6cad463f913d5e8a1200cb9427588e60f0"} Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206761 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"fe2cb13a282b5bb779c06aac825d62a4c75423592a3808b0bd9214dfd07f7a25"} Feb 01 07:33:18 crc kubenswrapper[4835]: I0201 07:33:18.206779 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"43d75cd2de4c4faec6c988b23db3ccd12b8a8a40e5647b179769b83777565381"} Feb 01 07:33:19 crc kubenswrapper[4835]: I0201 07:33:19.223481 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9qwwp" event={"ID":"5c427241-76d6-4772-9a78-74952bdbf29f","Type":"ContainerStarted","Data":"687956705a3f872719e9b399d17c14bb3f91db40d7ee0a52ad4e94a4fdb8f033"} Feb 01 07:33:19 crc kubenswrapper[4835]: I0201 07:33:19.223809 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:19 crc kubenswrapper[4835]: I0201 07:33:19.266354 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9qwwp" podStartSLOduration=5.909598306 podStartE2EDuration="13.266321253s" podCreationTimestamp="2026-02-01 07:33:06 +0000 UTC" firstStartedPulling="2026-02-01 07:33:07.52075569 +0000 UTC m=+660.641192144" lastFinishedPulling="2026-02-01 07:33:14.877478647 +0000 UTC m=+667.997915091" observedRunningTime="2026-02-01 07:33:19.258290945 +0000 UTC m=+672.378727469" watchObservedRunningTime="2026-02-01 07:33:19.266321253 +0000 UTC m=+672.386757757" Feb 01 07:33:22 crc kubenswrapper[4835]: I0201 07:33:22.343105 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:22 crc kubenswrapper[4835]: I0201 07:33:22.375267 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.677471 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"] Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.678634 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-vzd94" Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.684048 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.685312 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-wcczr" Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.685310 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.690035 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"] Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.775987 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") pod \"mariadb-operator-index-vzd94\" (UID: \"61daec47-a8bc-4ead-9f76-fcf5fca43147\") " pod="openstack-operators/mariadb-operator-index-vzd94" Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.877508 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") pod \"mariadb-operator-index-vzd94\" (UID: \"61daec47-a8bc-4ead-9f76-fcf5fca43147\") " pod="openstack-operators/mariadb-operator-index-vzd94" Feb 01 07:33:23 crc kubenswrapper[4835]: I0201 07:33:23.899404 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") pod \"mariadb-operator-index-vzd94\" (UID: \"61daec47-a8bc-4ead-9f76-fcf5fca43147\") " pod="openstack-operators/mariadb-operator-index-vzd94" Feb 01 07:33:24 crc kubenswrapper[4835]: I0201 07:33:24.010142 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-vzd94" Feb 01 07:33:24 crc kubenswrapper[4835]: I0201 07:33:24.518110 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"] Feb 01 07:33:24 crc kubenswrapper[4835]: W0201 07:33:24.527744 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61daec47_a8bc_4ead_9f76_fcf5fca43147.slice/crio-e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894 WatchSource:0}: Error finding container e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894: Status 404 returned error can't find the container with id e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894 Feb 01 07:33:25 crc kubenswrapper[4835]: I0201 07:33:25.274776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-vzd94" event={"ID":"61daec47-a8bc-4ead-9f76-fcf5fca43147","Type":"ContainerStarted","Data":"e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894"} Feb 01 07:33:26 crc kubenswrapper[4835]: I0201 07:33:26.288556 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-vzd94" event={"ID":"61daec47-a8bc-4ead-9f76-fcf5fca43147","Type":"ContainerStarted","Data":"7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55"} Feb 01 07:33:26 crc kubenswrapper[4835]: I0201 07:33:26.317863 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-vzd94" podStartSLOduration=2.319151936 podStartE2EDuration="3.317834226s" podCreationTimestamp="2026-02-01 07:33:23 +0000 UTC" firstStartedPulling="2026-02-01 07:33:24.5313109 +0000 UTC m=+677.651747354" lastFinishedPulling="2026-02-01 07:33:25.52999321 +0000 UTC m=+678.650429644" observedRunningTime="2026-02-01 07:33:26.31282196 +0000 UTC m=+679.433258434" watchObservedRunningTime="2026-02-01 07:33:26.317834226 +0000 UTC m=+679.438270700" Feb 01 07:33:26 crc kubenswrapper[4835]: I0201 07:33:26.850396 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"] Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.344877 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7ldwd" Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.347385 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9qwwp" Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.467600 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-hgssn"] Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.468372 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-hgssn" Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.476170 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hgssn"] Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.541762 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tqjr\" (UniqueName: \"kubernetes.io/projected/bc494048-8b2c-4d2e-925e-8b1b779dab89-kube-api-access-8tqjr\") pod \"mariadb-operator-index-hgssn\" (UID: \"bc494048-8b2c-4d2e-925e-8b1b779dab89\") " pod="openstack-operators/mariadb-operator-index-hgssn" Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.644159 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqjr\" (UniqueName: \"kubernetes.io/projected/bc494048-8b2c-4d2e-925e-8b1b779dab89-kube-api-access-8tqjr\") pod \"mariadb-operator-index-hgssn\" (UID: \"bc494048-8b2c-4d2e-925e-8b1b779dab89\") " pod="openstack-operators/mariadb-operator-index-hgssn" Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.669457 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqjr\" (UniqueName: \"kubernetes.io/projected/bc494048-8b2c-4d2e-925e-8b1b779dab89-kube-api-access-8tqjr\") pod \"mariadb-operator-index-hgssn\" (UID: \"bc494048-8b2c-4d2e-925e-8b1b779dab89\") " pod="openstack-operators/mariadb-operator-index-hgssn" Feb 01 07:33:27 crc kubenswrapper[4835]: I0201 07:33:27.781469 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-hgssn" Feb 01 07:33:28 crc kubenswrapper[4835]: I0201 07:33:28.195395 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-hgssn"] Feb 01 07:33:28 crc kubenswrapper[4835]: I0201 07:33:28.302863 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hgssn" event={"ID":"bc494048-8b2c-4d2e-925e-8b1b779dab89","Type":"ContainerStarted","Data":"095297a208669352587746dd3adcf3244e16253140f3b49a138f639f2322d82a"} Feb 01 07:33:28 crc kubenswrapper[4835]: I0201 07:33:28.303069 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-vzd94" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerName="registry-server" containerID="cri-o://7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55" gracePeriod=2 Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.344200 4835 generic.go:334] "Generic (PLEG): container finished" podID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerID="7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55" exitCode=0 Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.344266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-vzd94" event={"ID":"61daec47-a8bc-4ead-9f76-fcf5fca43147","Type":"ContainerDied","Data":"7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55"} Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.549606 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-vzd94" Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.570404 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") pod \"61daec47-a8bc-4ead-9f76-fcf5fca43147\" (UID: \"61daec47-a8bc-4ead-9f76-fcf5fca43147\") " Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.588643 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf" (OuterVolumeSpecName: "kube-api-access-rfndf") pod "61daec47-a8bc-4ead-9f76-fcf5fca43147" (UID: "61daec47-a8bc-4ead-9f76-fcf5fca43147"). InnerVolumeSpecName "kube-api-access-rfndf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:33:29 crc kubenswrapper[4835]: I0201 07:33:29.672808 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfndf\" (UniqueName: \"kubernetes.io/projected/61daec47-a8bc-4ead-9f76-fcf5fca43147-kube-api-access-rfndf\") on node \"crc\" DevicePath \"\"" Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.350346 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-vzd94" event={"ID":"61daec47-a8bc-4ead-9f76-fcf5fca43147","Type":"ContainerDied","Data":"e228621a4370f2d9433fb96f0949b6e19537d374c81dc6d6a0e3bc6eb82fd894"} Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.350396 4835 scope.go:117] "RemoveContainer" containerID="7fe581ac06c8d9bbe9a6da9878e81a9db49b53afc871b3268c9a15241b9c7d55" Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.350500 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-vzd94" Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.356190 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-hgssn" event={"ID":"bc494048-8b2c-4d2e-925e-8b1b779dab89","Type":"ContainerStarted","Data":"00535660be0470f57ec6a455366d6aae9b9f2d8d7e55f5991b7f07020dd58c09"} Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.383253 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-hgssn" podStartSLOduration=2.363307875 podStartE2EDuration="3.383232941s" podCreationTimestamp="2026-02-01 07:33:27 +0000 UTC" firstStartedPulling="2026-02-01 07:33:28.2141109 +0000 UTC m=+681.334547334" lastFinishedPulling="2026-02-01 07:33:29.234035956 +0000 UTC m=+682.354472400" observedRunningTime="2026-02-01 07:33:30.380712913 +0000 UTC m=+683.501149387" watchObservedRunningTime="2026-02-01 07:33:30.383232941 +0000 UTC m=+683.503669375" Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.397952 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"] Feb 01 07:33:30 crc kubenswrapper[4835]: I0201 07:33:30.408549 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-vzd94"] Feb 01 07:33:31 crc kubenswrapper[4835]: I0201 07:33:31.575526 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" path="/var/lib/kubelet/pods/61daec47-a8bc-4ead-9f76-fcf5fca43147/volumes" Feb 01 07:33:37 crc kubenswrapper[4835]: I0201 07:33:37.781722 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-hgssn" Feb 01 07:33:37 crc kubenswrapper[4835]: I0201 07:33:37.782455 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-hgssn" Feb 01 07:33:37 crc kubenswrapper[4835]: I0201 07:33:37.839792 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-hgssn" Feb 01 07:33:38 crc kubenswrapper[4835]: I0201 07:33:38.466499 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-hgssn" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.819088 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"] Feb 01 07:33:42 crc kubenswrapper[4835]: E0201 07:33:42.819797 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerName="registry-server" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.819820 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerName="registry-server" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.820060 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="61daec47-a8bc-4ead-9f76-fcf5fca43147" containerName="registry-server" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.821386 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.823887 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.840563 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"] Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.854275 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.854336 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.854677 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.955836 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.955937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.956011 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.956608 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.956727 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:42 crc kubenswrapper[4835]: I0201 07:33:42.991841 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:43 crc kubenswrapper[4835]: I0201 07:33:43.151848 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:43 crc kubenswrapper[4835]: I0201 07:33:43.701728 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"] Feb 01 07:33:43 crc kubenswrapper[4835]: W0201 07:33:43.723921 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod147369ac_5553_4aa7_944b_878065951228.slice/crio-d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec WatchSource:0}: Error finding container d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec: Status 404 returned error can't find the container with id d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec Feb 01 07:33:44 crc kubenswrapper[4835]: I0201 07:33:44.473271 4835 generic.go:334] "Generic (PLEG): container finished" podID="147369ac-5553-4aa7-944b-878065951228" containerID="4c04c0aadb0582b3c423a84a41aff698e1c915ae6ed84c5785dce5be5bc1aae5" exitCode=0 Feb 01 07:33:44 crc kubenswrapper[4835]: I0201 07:33:44.473395 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerDied","Data":"4c04c0aadb0582b3c423a84a41aff698e1c915ae6ed84c5785dce5be5bc1aae5"} Feb 01 07:33:44 crc kubenswrapper[4835]: I0201 07:33:44.473722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerStarted","Data":"d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec"} Feb 01 07:33:45 crc kubenswrapper[4835]: I0201 07:33:45.484602 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerStarted","Data":"a5b34c3269d9077c1868416ef9788156afaffaf20ad86ae96b120830561501d3"} Feb 01 07:33:46 crc kubenswrapper[4835]: 
Feb 01 07:33:46 crc kubenswrapper[4835]: I0201 07:33:46.495038 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerDied","Data":"a5b34c3269d9077c1868416ef9788156afaffaf20ad86ae96b120830561501d3"}
Feb 01 07:33:47 crc kubenswrapper[4835]: I0201 07:33:47.506049 4835 generic.go:334] "Generic (PLEG): container finished" podID="147369ac-5553-4aa7-944b-878065951228" containerID="f91f806086054776a6bb00c5c22c3d8c35dd533d1f5bd6037500d74cf0533b06" exitCode=0
Feb 01 07:33:47 crc kubenswrapper[4835]: I0201 07:33:47.506229 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerDied","Data":"f91f806086054776a6bb00c5c22c3d8c35dd533d1f5bd6037500d74cf0533b06"}
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.781085 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d"
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.942895 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") pod \"147369ac-5553-4aa7-944b-878065951228\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") "
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.943021 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") pod \"147369ac-5553-4aa7-944b-878065951228\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") "
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.943097 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") pod \"147369ac-5553-4aa7-944b-878065951228\" (UID: \"147369ac-5553-4aa7-944b-878065951228\") "
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.944572 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle" (OuterVolumeSpecName: "bundle") pod "147369ac-5553-4aa7-944b-878065951228" (UID: "147369ac-5553-4aa7-944b-878065951228"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.951226 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg" (OuterVolumeSpecName: "kube-api-access-z74gg") pod "147369ac-5553-4aa7-944b-878065951228" (UID: "147369ac-5553-4aa7-944b-878065951228"). InnerVolumeSpecName "kube-api-access-z74gg". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:33:48 crc kubenswrapper[4835]: I0201 07:33:48.961139 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util" (OuterVolumeSpecName: "util") pod "147369ac-5553-4aa7-944b-878065951228" (UID: "147369ac-5553-4aa7-944b-878065951228"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.044995 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z74gg\" (UniqueName: \"kubernetes.io/projected/147369ac-5553-4aa7-944b-878065951228-kube-api-access-z74gg\") on node \"crc\" DevicePath \"\"" Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.045097 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.045131 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/147369ac-5553-4aa7-944b-878065951228-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.523768 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" event={"ID":"147369ac-5553-4aa7-944b-878065951228","Type":"ContainerDied","Data":"d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec"} Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.523857 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d573702509e39abe1f6be9b25d45427b6341b4d0d68e7744d35c16256ab074ec" Feb 01 07:33:49 crc kubenswrapper[4835]: I0201 07:33:49.523864 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.049119 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"] Feb 01 07:33:57 crc kubenswrapper[4835]: E0201 07:33:57.050481 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="util" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.050507 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="util" Feb 01 07:33:57 crc kubenswrapper[4835]: E0201 07:33:57.050611 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="pull" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.050622 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="pull" Feb 01 07:33:57 crc kubenswrapper[4835]: E0201 07:33:57.050649 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="extract" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.050657 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="extract" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.051146 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="147369ac-5553-4aa7-944b-878065951228" containerName="extract" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.052199 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.065054 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.066070 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4pq2v" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.066172 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.081005 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"] Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.166596 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-apiservice-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.166660 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-webhook-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " 
pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.166751 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v44b\" (UniqueName: \"kubernetes.io/projected/73820432-e4ca-45a7-ae9c-77a538ce1d20-kube-api-access-4v44b\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.267916 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-apiservice-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.268023 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-webhook-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.268263 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v44b\" (UniqueName: \"kubernetes.io/projected/73820432-e4ca-45a7-ae9c-77a538ce1d20-kube-api-access-4v44b\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.274732 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-apiservice-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.274827 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73820432-e4ca-45a7-ae9c-77a538ce1d20-webhook-cert\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.289940 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v44b\" (UniqueName: \"kubernetes.io/projected/73820432-e4ca-45a7-ae9c-77a538ce1d20-kube-api-access-4v44b\") pod \"mariadb-operator-controller-manager-5fc7bf5575-vbqwd\" (UID: \"73820432-e4ca-45a7-ae9c-77a538ce1d20\") " pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.390768 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:33:57 crc kubenswrapper[4835]: I0201 07:33:57.643117 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd"] Feb 01 07:33:58 crc kubenswrapper[4835]: I0201 07:33:58.585225 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" event={"ID":"73820432-e4ca-45a7-ae9c-77a538ce1d20","Type":"ContainerStarted","Data":"fe87356c43c61ae626b92e1fc497af9eb35c84218fac5ca5f3154727b19e8a50"} Feb 01 07:34:01 crc kubenswrapper[4835]: I0201 07:34:01.604829 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" event={"ID":"73820432-e4ca-45a7-ae9c-77a538ce1d20","Type":"ContainerStarted","Data":"dc732c1444db053ba7c64a76b53a633a3e4530b6828a731db27d025313abf3db"} Feb 01 07:34:01 crc kubenswrapper[4835]: I0201 07:34:01.605283 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:34:01 crc kubenswrapper[4835]: I0201 07:34:01.627764 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" podStartSLOduration=1.173798282 podStartE2EDuration="4.6277403s" podCreationTimestamp="2026-02-01 07:33:57 +0000 UTC" firstStartedPulling="2026-02-01 07:33:57.661480947 +0000 UTC m=+710.781917381" lastFinishedPulling="2026-02-01 07:34:01.115422965 +0000 UTC m=+714.235859399" observedRunningTime="2026-02-01 07:34:01.624992818 +0000 UTC m=+714.745429282" watchObservedRunningTime="2026-02-01 07:34:01.6277403 +0000 UTC m=+714.748176754" Feb 01 07:34:07 crc kubenswrapper[4835]: I0201 07:34:07.398769 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5fc7bf5575-vbqwd" Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.698944 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-x9r54"] Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.702094 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-x9r54" Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.705694 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-gpxdj" Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.720459 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-x9r54"] Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.734983 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lftl\" (UniqueName: \"kubernetes.io/projected/c754e3d7-d607-4427-b349-b5c22df261ec-kube-api-access-2lftl\") pod \"infra-operator-index-x9r54\" (UID: \"c754e3d7-d607-4427-b349-b5c22df261ec\") " pod="openstack-operators/infra-operator-index-x9r54" Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.836233 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lftl\" (UniqueName: \"kubernetes.io/projected/c754e3d7-d607-4427-b349-b5c22df261ec-kube-api-access-2lftl\") pod \"infra-operator-index-x9r54\" (UID: \"c754e3d7-d607-4427-b349-b5c22df261ec\") " pod="openstack-operators/infra-operator-index-x9r54" Feb 01 07:34:13 crc kubenswrapper[4835]: I0201 07:34:13.862018 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lftl\" (UniqueName: \"kubernetes.io/projected/c754e3d7-d607-4427-b349-b5c22df261ec-kube-api-access-2lftl\") pod \"infra-operator-index-x9r54\" (UID: \"c754e3d7-d607-4427-b349-b5c22df261ec\") " pod="openstack-operators/infra-operator-index-x9r54" Feb 01 07:34:14 crc kubenswrapper[4835]: I0201 07:34:14.034348 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-x9r54" Feb 01 07:34:14 crc kubenswrapper[4835]: I0201 07:34:14.546312 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-x9r54"] Feb 01 07:34:14 crc kubenswrapper[4835]: W0201 07:34:14.555236 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc754e3d7_d607_4427_b349_b5c22df261ec.slice/crio-661626f5d57ce8829d807379c8f6446cd04b089fc237bd653180432fdd96099d WatchSource:0}: Error finding container 661626f5d57ce8829d807379c8f6446cd04b089fc237bd653180432fdd96099d: Status 404 returned error can't find the container with id 661626f5d57ce8829d807379c8f6446cd04b089fc237bd653180432fdd96099d Feb 01 07:34:14 crc kubenswrapper[4835]: I0201 07:34:14.698693 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-x9r54" event={"ID":"c754e3d7-d607-4427-b349-b5c22df261ec","Type":"ContainerStarted","Data":"661626f5d57ce8829d807379c8f6446cd04b089fc237bd653180432fdd96099d"} Feb 01 07:34:16 crc kubenswrapper[4835]: I0201 07:34:16.711626 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-x9r54" event={"ID":"c754e3d7-d607-4427-b349-b5c22df261ec","Type":"ContainerStarted","Data":"d6fe90ef260d00d9323d7bac74882c87a064150e0506e70d2167ad57d285ccd5"} Feb 01 07:34:16 crc kubenswrapper[4835]: I0201 07:34:16.731578 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-x9r54" podStartSLOduration=1.985478617 podStartE2EDuration="3.73156103s" podCreationTimestamp="2026-02-01 07:34:13 +0000 UTC" firstStartedPulling="2026-02-01 07:34:14.557981672 +0000 UTC m=+727.678418146" lastFinishedPulling="2026-02-01 07:34:16.304064085 +0000 UTC m=+729.424500559" observedRunningTime="2026-02-01 07:34:16.731128349 +0000 UTC m=+729.851564803" watchObservedRunningTime="2026-02-01 07:34:16.73156103 +0000 UTC m=+729.851997464" Feb 01 07:34:24 crc kubenswrapper[4835]: I0201 07:34:24.034666 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-x9r54" Feb 01 07:34:24 crc kubenswrapper[4835]: I0201 07:34:24.035013 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-x9r54" Feb 01 07:34:24 crc kubenswrapper[4835]: I0201 07:34:24.070350 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-x9r54" Feb 01 07:34:24 crc kubenswrapper[4835]: I0201 07:34:24.812269 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-x9r54" Feb 01 07:34:25 crc kubenswrapper[4835]: I0201 07:34:25.192395 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:34:25 crc kubenswrapper[4835]: I0201 07:34:25.192563 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 01 07:34:26 crc kubenswrapper[4835]: I0201 07:34:26.959284 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"] Feb 01 07:34:26 crc kubenswrapper[4835]: I0201 07:34:26.960613 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:26 crc kubenswrapper[4835]: I0201 07:34:26.963371 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:34:26 crc kubenswrapper[4835]: I0201 07:34:26.979344 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"] Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.130196 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.130251 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.130304 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.231657 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.231772 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.231880 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: 
\"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.232569 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.232720 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.262918 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.331348 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.637536 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4"] Feb 01 07:34:27 crc kubenswrapper[4835]: W0201 07:34:27.647838 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4326f882_2be0_41a9_b71d_14e811ba9343.slice/crio-921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a WatchSource:0}: Error finding container 921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a: Status 404 returned error can't find the container with id 921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a Feb 01 07:34:27 crc kubenswrapper[4835]: I0201 07:34:27.795900 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerStarted","Data":"921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a"} Feb 01 07:34:28 crc kubenswrapper[4835]: I0201 07:34:28.806789 4835 generic.go:334] "Generic (PLEG): container finished" podID="4326f882-2be0-41a9-b71d-14e811ba9343" containerID="9c9b24f7ffd0500deb0b44af392fbcd90e3501df8690512163f2c78ecc5f2750" exitCode=0 Feb 01 07:34:28 crc kubenswrapper[4835]: I0201 07:34:28.806860 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerDied","Data":"9c9b24f7ffd0500deb0b44af392fbcd90e3501df8690512163f2c78ecc5f2750"} Feb 01 07:34:29 crc kubenswrapper[4835]: I0201 07:34:29.815370 4835 generic.go:334] "Generic (PLEG): 
container finished" podID="4326f882-2be0-41a9-b71d-14e811ba9343" containerID="80600d67a82faedb2773f73ef514dfb8a4b47134e6d54fcb4ca036b44387978b" exitCode=0 Feb 01 07:34:29 crc kubenswrapper[4835]: I0201 07:34:29.815496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerDied","Data":"80600d67a82faedb2773f73ef514dfb8a4b47134e6d54fcb4ca036b44387978b"} Feb 01 07:34:30 crc kubenswrapper[4835]: I0201 07:34:30.833800 4835 generic.go:334] "Generic (PLEG): container finished" podID="4326f882-2be0-41a9-b71d-14e811ba9343" containerID="662c3275e759f91f96bafbb45c12983ad019e7e1b8c42648d4fe1b80527cc463" exitCode=0 Feb 01 07:34:30 crc kubenswrapper[4835]: I0201 07:34:30.833847 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerDied","Data":"662c3275e759f91f96bafbb45c12983ad019e7e1b8c42648d4fe1b80527cc463"} Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.185127 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.304811 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") pod \"4326f882-2be0-41a9-b71d-14e811ba9343\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.304944 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") pod \"4326f882-2be0-41a9-b71d-14e811ba9343\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.305216 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") pod \"4326f882-2be0-41a9-b71d-14e811ba9343\" (UID: \"4326f882-2be0-41a9-b71d-14e811ba9343\") " Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.319135 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle" (OuterVolumeSpecName: "bundle") pod "4326f882-2be0-41a9-b71d-14e811ba9343" (UID: "4326f882-2be0-41a9-b71d-14e811ba9343"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.325093 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m" (OuterVolumeSpecName: "kube-api-access-cbw7m") pod "4326f882-2be0-41a9-b71d-14e811ba9343" (UID: "4326f882-2be0-41a9-b71d-14e811ba9343"). InnerVolumeSpecName "kube-api-access-cbw7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.333680 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util" (OuterVolumeSpecName: "util") pod "4326f882-2be0-41a9-b71d-14e811ba9343" (UID: "4326f882-2be0-41a9-b71d-14e811ba9343"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.406790 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.406819 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbw7m\" (UniqueName: \"kubernetes.io/projected/4326f882-2be0-41a9-b71d-14e811ba9343-kube-api-access-cbw7m\") on node \"crc\" DevicePath \"\"" Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.406830 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4326f882-2be0-41a9-b71d-14e811ba9343-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.869722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" event={"ID":"4326f882-2be0-41a9-b71d-14e811ba9343","Type":"ContainerDied","Data":"921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a"} Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.869783 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921256d20a3de711ad5d3567cd7908b61c7f1e0b4d2d8cfc13ded53d8a7ffa7a" Feb 01 07:34:32 crc kubenswrapper[4835]: I0201 07:34:32.869802 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4" Feb 01 07:34:38 crc kubenswrapper[4835]: I0201 07:34:38.925765 4835 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.960055 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Feb 01 07:34:42 crc kubenswrapper[4835]: E0201 07:34:42.960832 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="pull" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.960855 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="pull" Feb 01 07:34:42 crc kubenswrapper[4835]: E0201 07:34:42.960884 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="util" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.960897 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="util" Feb 01 07:34:42 crc kubenswrapper[4835]: E0201 07:34:42.960918 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="extract" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.960930 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="extract" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.961141 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4326f882-2be0-41a9-b71d-14e811ba9343" containerName="extract" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.962155 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.965018 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-config-data" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.965291 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-scripts" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.966824 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"galera-openstack-dockercfg-cp2rc" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.970705 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openshift-service-ca.crt" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.974129 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.975859 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.980798 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.981935 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.983036 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"kube-root-ca.crt" Feb 01 07:34:42 crc kubenswrapper[4835]: I0201 07:34:42.988146 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.028395 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.031700 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.067129 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5pt\" (UniqueName: \"kubernetes.io/projected/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kube-api-access-6x5pt\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.067194 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-default\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.067276 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.069649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kolla-config\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.069687 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.069763 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170580 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170628 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7t9z\" (UniqueName: \"kubernetes.io/projected/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kube-api-access-k7t9z\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170659 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170689 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kolla-config\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170737 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5pt\" (UniqueName: \"kubernetes.io/projected/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kube-api-access-6x5pt\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170832 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-operator-scripts\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170907 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kolla-config\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170953 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.170990 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-default\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171024 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171106 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171138 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdlcc\" (UniqueName: \"kubernetes.io/projected/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kube-api-access-tdlcc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171143 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") device mount path \"/mnt/openstack/pv03\"" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171180 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-default\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171263 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-generated\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171317 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kolla-config\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171352 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171383 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171458 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-default\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.171877 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.172402 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kolla-config\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.172467 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-config-data-default\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.173697 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1414aa9-85a0-4ed8-b897-0afc315eacf6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.200517 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5pt\" (UniqueName: \"kubernetes.io/projected/d1414aa9-85a0-4ed8-b897-0afc315eacf6-kube-api-access-6x5pt\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.203135 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"d1414aa9-85a0-4ed8-b897-0afc315eacf6\") " pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272623 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272667 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdlcc\" (UniqueName: \"kubernetes.io/projected/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kube-api-access-tdlcc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272698 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-default\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272715 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-generated\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.272732 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273176 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-generated\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273175 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") device mount path \"/mnt/openstack/pv01\"" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273237 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-generated\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273368 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-default\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273470 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7t9z\" (UniqueName: \"kubernetes.io/projected/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kube-api-access-k7t9z\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273531 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273554 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-config-data-default\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273602 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kolla-config\") pod \"openstack-galera-1\" (UID: 
\"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273671 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-operator-scripts\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273720 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kolla-config\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.273766 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.274012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-config-data-default\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.274079 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") device mount path \"/mnt/openstack/pv09\"" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.274203 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kolla-config\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.274685 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kolla-config\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.275158 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f271d73a-6ed8-4c97-b087-c6b3287c11e4-operator-scripts\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.276020 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44d32e5-044c-42e2-a6c8-eb93e48219f2-operator-scripts\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.291713 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.292962 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7t9z\" (UniqueName: \"kubernetes.io/projected/b44d32e5-044c-42e2-a6c8-eb93e48219f2-kube-api-access-k7t9z\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.296618 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-1\" (UID: \"b44d32e5-044c-42e2-a6c8-eb93e48219f2\") " pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.297321 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdlcc\" (UniqueName: \"kubernetes.io/projected/f271d73a-6ed8-4c97-b087-c6b3287c11e4-kube-api-access-tdlcc\") pod \"openstack-galera-2\" (UID: \"f271d73a-6ed8-4c97-b087-c6b3287c11e4\") " pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.319973 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.332762 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.341853 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.606060 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.607616 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.609852 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n92d9" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.610074 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.630313 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.679136 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-webhook-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.679189 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5r7\" (UniqueName: \"kubernetes.io/projected/aeafdd64-5ab8-429a-9411-bdfe3e0780af-kube-api-access-xm5r7\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.679283 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-apiservice-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.744394 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Feb 01 07:34:43 crc kubenswrapper[4835]: W0201 07:34:43.746592 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf271d73a_6ed8_4c97_b087_c6b3287c11e4.slice/crio-1d3810bd40290d001c50d611127deeb375a9c037efa9f0257f26a45a9804034a WatchSource:0}: Error finding container 1d3810bd40290d001c50d611127deeb375a9c037efa9f0257f26a45a9804034a: Status 404 returned error can't find the container with id 1d3810bd40290d001c50d611127deeb375a9c037efa9f0257f26a45a9804034a Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.780744 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-apiservice-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.780812 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-webhook-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: 
\"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.780845 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5r7\" (UniqueName: \"kubernetes.io/projected/aeafdd64-5ab8-429a-9411-bdfe3e0780af-kube-api-access-xm5r7\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.787156 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-webhook-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.789620 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/aeafdd64-5ab8-429a-9411-bdfe3e0780af-apiservice-cert\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.791848 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Feb 01 07:34:43 crc kubenswrapper[4835]: W0201 07:34:43.798990 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1414aa9_85a0_4ed8_b897_0afc315eacf6.slice/crio-3b0e6951be28475ffa536bb1320fcf2372d6afbdc45911840537b15dc1039aad WatchSource:0}: Error finding container 3b0e6951be28475ffa536bb1320fcf2372d6afbdc45911840537b15dc1039aad: Status 404 returned error can't find the container with id 3b0e6951be28475ffa536bb1320fcf2372d6afbdc45911840537b15dc1039aad Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.799673 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5r7\" (UniqueName: \"kubernetes.io/projected/aeafdd64-5ab8-429a-9411-bdfe3e0780af-kube-api-access-xm5r7\") pod \"infra-operator-controller-manager-6f4d667fdd-rfzbv\" (UID: \"aeafdd64-5ab8-429a-9411-bdfe3e0780af\") " pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.800399 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.923687 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.941740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"b44d32e5-044c-42e2-a6c8-eb93e48219f2","Type":"ContainerStarted","Data":"7e471ffb79fbd39d2af050977d6f3db82bc4757feb802c7704aeb2c0eca8ced0"} Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.942495 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"d1414aa9-85a0-4ed8-b897-0afc315eacf6","Type":"ContainerStarted","Data":"3b0e6951be28475ffa536bb1320fcf2372d6afbdc45911840537b15dc1039aad"} Feb 01 07:34:43 crc kubenswrapper[4835]: I0201 07:34:43.943080 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f271d73a-6ed8-4c97-b087-c6b3287c11e4","Type":"ContainerStarted","Data":"1d3810bd40290d001c50d611127deeb375a9c037efa9f0257f26a45a9804034a"} Feb 01 07:34:44 crc kubenswrapper[4835]: I0201 07:34:44.099943 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv"] Feb 01 07:34:44 crc kubenswrapper[4835]: W0201 07:34:44.110882 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeafdd64_5ab8_429a_9411_bdfe3e0780af.slice/crio-1672c7f959d52e883053e13c851994f62f2737d3c02c23a393421e694aa21675 WatchSource:0}: Error finding container 1672c7f959d52e883053e13c851994f62f2737d3c02c23a393421e694aa21675: Status 404 returned error can't find the container with id 1672c7f959d52e883053e13c851994f62f2737d3c02c23a393421e694aa21675 Feb 01 07:34:44 crc kubenswrapper[4835]: I0201 07:34:44.951222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" event={"ID":"aeafdd64-5ab8-429a-9411-bdfe3e0780af","Type":"ContainerStarted","Data":"1672c7f959d52e883053e13c851994f62f2737d3c02c23a393421e694aa21675"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.000539 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f271d73a-6ed8-4c97-b087-c6b3287c11e4","Type":"ContainerStarted","Data":"3102c90c1c13ed3302574e01fc958fc256bd73d4817ea2a1f116bf8dc4be7f22"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.002729 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"b44d32e5-044c-42e2-a6c8-eb93e48219f2","Type":"ContainerStarted","Data":"0025e4ef285b635223e56a201cfd8fde36b2b0eedf19340b5ef5dc6e2e9e082c"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.004708 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" event={"ID":"aeafdd64-5ab8-429a-9411-bdfe3e0780af","Type":"ContainerStarted","Data":"92d3f3d392b746f124282379c6f72ca567746e7e14c773d94d3fcb1bccc20102"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.004928 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.006588 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" 
event={"ID":"d1414aa9-85a0-4ed8-b897-0afc315eacf6","Type":"ContainerStarted","Data":"62ad44fb76befa2a607f268f2d68073d67fe82504db5ad8b1a0ef4eff4c5da7b"} Feb 01 07:34:53 crc kubenswrapper[4835]: I0201 07:34:53.123215 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" podStartSLOduration=2.517100058 podStartE2EDuration="10.123182724s" podCreationTimestamp="2026-02-01 07:34:43 +0000 UTC" firstStartedPulling="2026-02-01 07:34:44.114500401 +0000 UTC m=+757.234936835" lastFinishedPulling="2026-02-01 07:34:51.720583067 +0000 UTC m=+764.841019501" observedRunningTime="2026-02-01 07:34:53.116651322 +0000 UTC m=+766.237087846" watchObservedRunningTime="2026-02-01 07:34:53.123182724 +0000 UTC m=+766.243619198" Feb 01 07:34:55 crc kubenswrapper[4835]: I0201 07:34:55.192209 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:34:55 crc kubenswrapper[4835]: I0201 07:34:55.192842 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.028761 4835 generic.go:334] "Generic (PLEG): container finished" podID="b44d32e5-044c-42e2-a6c8-eb93e48219f2" containerID="0025e4ef285b635223e56a201cfd8fde36b2b0eedf19340b5ef5dc6e2e9e082c" exitCode=0 Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.028855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"b44d32e5-044c-42e2-a6c8-eb93e48219f2","Type":"ContainerDied","Data":"0025e4ef285b635223e56a201cfd8fde36b2b0eedf19340b5ef5dc6e2e9e082c"} Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.031964 4835 generic.go:334] "Generic (PLEG): container finished" podID="d1414aa9-85a0-4ed8-b897-0afc315eacf6" containerID="62ad44fb76befa2a607f268f2d68073d67fe82504db5ad8b1a0ef4eff4c5da7b" exitCode=0 Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.032020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"d1414aa9-85a0-4ed8-b897-0afc315eacf6","Type":"ContainerDied","Data":"62ad44fb76befa2a607f268f2d68073d67fe82504db5ad8b1a0ef4eff4c5da7b"} Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.034972 4835 generic.go:334] "Generic (PLEG): container finished" podID="f271d73a-6ed8-4c97-b087-c6b3287c11e4" containerID="3102c90c1c13ed3302574e01fc958fc256bd73d4817ea2a1f116bf8dc4be7f22" exitCode=0 Feb 01 07:34:56 crc kubenswrapper[4835]: I0201 07:34:56.035011 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f271d73a-6ed8-4c97-b087-c6b3287c11e4","Type":"ContainerDied","Data":"3102c90c1c13ed3302574e01fc958fc256bd73d4817ea2a1f116bf8dc4be7f22"} Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.043507 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" 
event={"ID":"b44d32e5-044c-42e2-a6c8-eb93e48219f2","Type":"ContainerStarted","Data":"6520b4b11e397559bd49700232e2ead795f17a06a1246be3adaf7e7ad5bfa961"} Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.045783 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"d1414aa9-85a0-4ed8-b897-0afc315eacf6","Type":"ContainerStarted","Data":"57146b43238d7b8a5f249537accc3d9eaa5ea3c7779ae2ff051551cd15cbe2bf"} Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.047642 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"f271d73a-6ed8-4c97-b087-c6b3287c11e4","Type":"ContainerStarted","Data":"39f54e10bdf1a5f7f7b43638c953b8eecae82ae7d71f742fdc764445e1ccc533"} Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.076525 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-1" podStartSLOduration=8.062633277 podStartE2EDuration="16.07650968s" podCreationTimestamp="2026-02-01 07:34:41 +0000 UTC" firstStartedPulling="2026-02-01 07:34:43.809693833 +0000 UTC m=+756.930130267" lastFinishedPulling="2026-02-01 07:34:51.823570226 +0000 UTC m=+764.944006670" observedRunningTime="2026-02-01 07:34:57.072839863 +0000 UTC m=+770.193276287" watchObservedRunningTime="2026-02-01 07:34:57.07650968 +0000 UTC m=+770.196946114" Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.118159 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-2" podStartSLOduration=7.99468188 podStartE2EDuration="16.118140225s" podCreationTimestamp="2026-02-01 07:34:41 +0000 UTC" firstStartedPulling="2026-02-01 07:34:43.748672188 +0000 UTC m=+756.869108622" lastFinishedPulling="2026-02-01 07:34:51.872130533 +0000 UTC m=+764.992566967" observedRunningTime="2026-02-01 07:34:57.109223511 +0000 UTC m=+770.229659945" watchObservedRunningTime="2026-02-01 07:34:57.118140225 +0000 UTC m=+770.238576659" Feb 01 07:34:57 crc kubenswrapper[4835]: I0201 07:34:57.128250 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-0" podStartSLOduration=8.088479647 podStartE2EDuration="16.12822902s" podCreationTimestamp="2026-02-01 07:34:41 +0000 UTC" firstStartedPulling="2026-02-01 07:34:43.80045198 +0000 UTC m=+756.920888414" lastFinishedPulling="2026-02-01 07:34:51.840201343 +0000 UTC m=+764.960637787" observedRunningTime="2026-02-01 07:34:57.125274853 +0000 UTC m=+770.245711307" watchObservedRunningTime="2026-02-01 07:34:57.12822902 +0000 UTC m=+770.248665474" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.320880 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.321177 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.333042 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.333090 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.342960 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:35:03 crc 
kubenswrapper[4835]: I0201 07:35:03.343018 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.435640 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:35:03 crc kubenswrapper[4835]: I0201 07:35:03.929761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6f4d667fdd-rfzbv" Feb 01 07:35:04 crc kubenswrapper[4835]: I0201 07:35:04.168629 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-2" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.448281 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/memcached-0"] Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.449584 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.451782 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"memcached-config-data" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.452138 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"memcached-memcached-dockercfg-sj2kx" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.464865 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.551324 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kolla-config\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.551404 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-config-data\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.551446 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2jxq\" (UniqueName: \"kubernetes.io/projected/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kube-api-access-r2jxq\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.652610 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kolla-config\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.652673 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-config-data\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.652695 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r2jxq\" (UniqueName: \"kubernetes.io/projected/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kube-api-access-r2jxq\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.655786 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"memcached-config-data" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.664739 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-config-data\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.664745 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kolla-config\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.679438 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2jxq\" (UniqueName: \"kubernetes.io/projected/37529abc-a5d7-416b-8ea4-c6f0542ab3a8-kube-api-access-r2jxq\") pod \"memcached-0\" (UID: \"37529abc-a5d7-416b-8ea4-c6f0542ab3a8\") " pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.767337 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"memcached-memcached-dockercfg-sj2kx" Feb 01 07:35:07 crc kubenswrapper[4835]: I0201 07:35:07.776496 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:08 crc kubenswrapper[4835]: I0201 07:35:08.998501 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Feb 01 07:35:09 crc kubenswrapper[4835]: I0201 07:35:09.121217 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"37529abc-a5d7-416b-8ea4-c6f0542ab3a8","Type":"ContainerStarted","Data":"4f04820b8f75f969f44cef21bb4b9f31f45d7190d2214ef49b7cbe3ffe8ac3bf"} Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.323045 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp8"] Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.324329 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.327924 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-2dvzh" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.334394 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp8"] Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.389801 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9v5b\" (UniqueName: \"kubernetes.io/projected/be408dba-dcbf-40e4-9b83-cd67424ad82d-kube-api-access-d9v5b\") pod \"rabbitmq-cluster-operator-index-nztp8\" (UID: \"be408dba-dcbf-40e4-9b83-cd67424ad82d\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.491256 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9v5b\" (UniqueName: \"kubernetes.io/projected/be408dba-dcbf-40e4-9b83-cd67424ad82d-kube-api-access-d9v5b\") pod \"rabbitmq-cluster-operator-index-nztp8\" (UID: \"be408dba-dcbf-40e4-9b83-cd67424ad82d\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.506948 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9v5b\" (UniqueName: \"kubernetes.io/projected/be408dba-dcbf-40e4-9b83-cd67424ad82d-kube-api-access-d9v5b\") pod \"rabbitmq-cluster-operator-index-nztp8\" (UID: \"be408dba-dcbf-40e4-9b83-cd67424ad82d\") " pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.648288 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:10 crc kubenswrapper[4835]: I0201 07:35:10.861669 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-nztp8"] Feb 01 07:35:11 crc kubenswrapper[4835]: I0201 07:35:11.136046 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" event={"ID":"be408dba-dcbf-40e4-9b83-cd67424ad82d","Type":"ContainerStarted","Data":"62af7412c494861b55af2471e0613e66a5f97e9faedbd7e1992431b82f2e9547"} Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.051597 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.052296 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.054181 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.061307 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.113119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.113555 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.142662 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"37529abc-a5d7-416b-8ea4-c6f0542ab3a8","Type":"ContainerStarted","Data":"e2b27039e88a5fec5a52799cecf637333dc65696640cbb74a7d2047b185e305b"} Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.142801 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.175132 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/memcached-0" podStartSLOduration=2.412485897 podStartE2EDuration="5.175118091s" podCreationTimestamp="2026-02-01 07:35:07 +0000 UTC" firstStartedPulling="2026-02-01 07:35:09.007572176 +0000 UTC m=+782.128008610" lastFinishedPulling="2026-02-01 07:35:11.77020437 +0000 UTC m=+784.890640804" observedRunningTime="2026-02-01 07:35:12.173806077 +0000 UTC m=+785.294242511" watchObservedRunningTime="2026-02-01 07:35:12.175118091 +0000 UTC m=+785.295554525" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.215047 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.215125 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.216898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") pod \"root-account-create-update-gmb7x\" (UID: 
\"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.238089 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") pod \"root-account-create-update-gmb7x\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.365632 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:12 crc kubenswrapper[4835]: I0201 07:35:12.884823 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:35:12 crc kubenswrapper[4835]: W0201 07:35:12.959919 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a95fd7f_8f31_420b_a847_e13f61aa0ce9.slice/crio-7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4 WatchSource:0}: Error finding container 7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4: Status 404 returned error can't find the container with id 7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4 Feb 01 07:35:13 crc kubenswrapper[4835]: I0201 07:35:13.179699 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gmb7x" event={"ID":"5a95fd7f-8f31-420b-a847-e13f61aa0ce9","Type":"ContainerStarted","Data":"4150461df03e979f73af252c924d3235e5873da5e6ee9fff2b41bd3c4a7515a0"} Feb 01 07:35:13 crc kubenswrapper[4835]: I0201 07:35:13.179961 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gmb7x" event={"ID":"5a95fd7f-8f31-420b-a847-e13f61aa0ce9","Type":"ContainerStarted","Data":"7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4"} Feb 01 07:35:13 crc kubenswrapper[4835]: I0201 07:35:13.197486 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/root-account-create-update-gmb7x" podStartSLOduration=1.197469865 podStartE2EDuration="1.197469865s" podCreationTimestamp="2026-02-01 07:35:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:35:13.194920638 +0000 UTC m=+786.315357072" watchObservedRunningTime="2026-02-01 07:35:13.197469865 +0000 UTC m=+786.317906299" Feb 01 07:35:13 crc kubenswrapper[4835]: I0201 07:35:13.449993 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/openstack-galera-2" podUID="f271d73a-6ed8-4c97-b087-c6b3287c11e4" containerName="galera" probeResult="failure" output=< Feb 01 07:35:13 crc kubenswrapper[4835]: wsrep_local_state_comment (Donor/Desynced) differs from Synced Feb 01 07:35:13 crc kubenswrapper[4835]: > Feb 01 07:35:14 crc kubenswrapper[4835]: I0201 07:35:14.574222 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:35:14 crc kubenswrapper[4835]: I0201 07:35:14.665455 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-0" Feb 01 07:35:15 crc kubenswrapper[4835]: I0201 07:35:15.190177 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" containerID="4150461df03e979f73af252c924d3235e5873da5e6ee9fff2b41bd3c4a7515a0" exitCode=0 Feb 01 07:35:15 crc kubenswrapper[4835]: I0201 07:35:15.190209 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gmb7x" event={"ID":"5a95fd7f-8f31-420b-a847-e13f61aa0ce9","Type":"ContainerDied","Data":"4150461df03e979f73af252c924d3235e5873da5e6ee9fff2b41bd3c4a7515a0"} Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.199489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" event={"ID":"be408dba-dcbf-40e4-9b83-cd67424ad82d","Type":"ContainerStarted","Data":"42e61bae0028233cd887c1a2c734dd4cb60bba1e5e9473c8b0715142c0adab43"} Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.213090 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" podStartSLOduration=1.222088192 podStartE2EDuration="6.213070083s" podCreationTimestamp="2026-02-01 07:35:10 +0000 UTC" firstStartedPulling="2026-02-01 07:35:10.88331093 +0000 UTC m=+784.003747364" lastFinishedPulling="2026-02-01 07:35:15.874292821 +0000 UTC m=+788.994729255" observedRunningTime="2026-02-01 07:35:16.211874432 +0000 UTC m=+789.332310866" watchObservedRunningTime="2026-02-01 07:35:16.213070083 +0000 UTC m=+789.333506517" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.554939 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.689520 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") pod \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.689607 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") pod \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\" (UID: \"5a95fd7f-8f31-420b-a847-e13f61aa0ce9\") " Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.690206 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a95fd7f-8f31-420b-a847-e13f61aa0ce9" (UID: "5a95fd7f-8f31-420b-a847-e13f61aa0ce9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.700160 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb" (OuterVolumeSpecName: "kube-api-access-74qnb") pod "5a95fd7f-8f31-420b-a847-e13f61aa0ce9" (UID: "5a95fd7f-8f31-420b-a847-e13f61aa0ce9"). InnerVolumeSpecName "kube-api-access-74qnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.792070 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:16 crc kubenswrapper[4835]: I0201 07:35:16.792128 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74qnb\" (UniqueName: \"kubernetes.io/projected/5a95fd7f-8f31-420b-a847-e13f61aa0ce9-kube-api-access-74qnb\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:17 crc kubenswrapper[4835]: I0201 07:35:17.209503 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gmb7x" Feb 01 07:35:17 crc kubenswrapper[4835]: I0201 07:35:17.209525 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gmb7x" event={"ID":"5a95fd7f-8f31-420b-a847-e13f61aa0ce9","Type":"ContainerDied","Data":"7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4"} Feb 01 07:35:17 crc kubenswrapper[4835]: I0201 07:35:17.209601 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7226295b3efc2f725a477177a12ac9d338f0a03987fdfaa8e30d0b9203079cf4" Feb 01 07:35:17 crc kubenswrapper[4835]: I0201 07:35:17.778265 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/memcached-0" Feb 01 07:35:18 crc kubenswrapper[4835]: I0201 07:35:18.152112 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:35:18 crc kubenswrapper[4835]: I0201 07:35:18.246730 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-1" Feb 01 07:35:20 crc kubenswrapper[4835]: I0201 07:35:20.650078 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:20 crc kubenswrapper[4835]: I0201 07:35:20.652163 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:20 crc kubenswrapper[4835]: I0201 07:35:20.684152 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:21 crc kubenswrapper[4835]: I0201 07:35:21.271790 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-nztp8" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.378684 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k"] Feb 01 07:35:23 crc kubenswrapper[4835]: E0201 07:35:23.381299 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" containerName="mariadb-account-create-update" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.381313 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" containerName="mariadb-account-create-update" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.381469 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" containerName="mariadb-account-create-update" Feb 01 07:35:23 
crc kubenswrapper[4835]: I0201 07:35:23.382866 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.386651 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.411778 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k"] Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.513590 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.513678 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.513722 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.615582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.615688 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.615755 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.616816 4835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.616861 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.650276 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.710107 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:23 crc kubenswrapper[4835]: I0201 07:35:23.989367 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k"] Feb 01 07:35:23 crc kubenswrapper[4835]: W0201 07:35:23.992562 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59f26b1b_b8b2_4479_8e35_a7a46c629d35.slice/crio-d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64 WatchSource:0}: Error finding container d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64: Status 404 returned error can't find the container with id d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64 Feb 01 07:35:24 crc kubenswrapper[4835]: I0201 07:35:24.272342 4835 generic.go:334] "Generic (PLEG): container finished" podID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerID="b6026b1967a0afc8e6eaed5606a24b459a9c02ffdc13c1973a9ff9e81ba50c34" exitCode=0 Feb 01 07:35:24 crc kubenswrapper[4835]: I0201 07:35:24.272611 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerDied","Data":"b6026b1967a0afc8e6eaed5606a24b459a9c02ffdc13c1973a9ff9e81ba50c34"} Feb 01 07:35:24 crc kubenswrapper[4835]: I0201 07:35:24.272709 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerStarted","Data":"d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64"} Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.191973 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 
07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.192057 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.192113 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.192924 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.193023 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa" gracePeriod=600 Feb 01 07:35:25 crc kubenswrapper[4835]: I0201 07:35:25.284499 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerStarted","Data":"ea63aa4f3bbb37aa8d4856c69bcaaac3274e2ae13b60a20b3975aa15031337de"} Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.299626 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa" exitCode=0 Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.299691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa"} Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.300059 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05"} Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.300093 4835 scope.go:117] "RemoveContainer" containerID="377901096f8562233e3d8083b0c24e7e0a643028b79ddd39edcc7cb8ec54319f" Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.304547 4835 generic.go:334] "Generic (PLEG): container finished" podID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerID="ea63aa4f3bbb37aa8d4856c69bcaaac3274e2ae13b60a20b3975aa15031337de" exitCode=0 Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.304586 4835 generic.go:334] "Generic (PLEG): container finished" podID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerID="cf68852754c97d96b1e5ebd1c69c8edb15576653503f5a13562894c2eb5b15f5" exitCode=0 Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.304695 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerDied","Data":"ea63aa4f3bbb37aa8d4856c69bcaaac3274e2ae13b60a20b3975aa15031337de"} Feb 01 07:35:26 crc kubenswrapper[4835]: I0201 07:35:26.304740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerDied","Data":"cf68852754c97d96b1e5ebd1c69c8edb15576653503f5a13562894c2eb5b15f5"} Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.705719 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.780907 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") pod \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.780976 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") pod \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.781027 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") pod \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\" (UID: \"59f26b1b-b8b2-4479-8e35-a7a46c629d35\") " Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.782054 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle" (OuterVolumeSpecName: "bundle") pod "59f26b1b-b8b2-4479-8e35-a7a46c629d35" (UID: "59f26b1b-b8b2-4479-8e35-a7a46c629d35"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.793076 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd" (OuterVolumeSpecName: "kube-api-access-9cnqd") pod "59f26b1b-b8b2-4479-8e35-a7a46c629d35" (UID: "59f26b1b-b8b2-4479-8e35-a7a46c629d35"). InnerVolumeSpecName "kube-api-access-9cnqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.805591 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util" (OuterVolumeSpecName: "util") pod "59f26b1b-b8b2-4479-8e35-a7a46c629d35" (UID: "59f26b1b-b8b2-4479-8e35-a7a46c629d35"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.882785 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cnqd\" (UniqueName: \"kubernetes.io/projected/59f26b1b-b8b2-4479-8e35-a7a46c629d35-kube-api-access-9cnqd\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.883229 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:27 crc kubenswrapper[4835]: I0201 07:35:27.883248 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/59f26b1b-b8b2-4479-8e35-a7a46c629d35-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:35:28 crc kubenswrapper[4835]: I0201 07:35:28.328292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" event={"ID":"59f26b1b-b8b2-4479-8e35-a7a46c629d35","Type":"ContainerDied","Data":"d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64"} Feb 01 07:35:28 crc kubenswrapper[4835]: I0201 07:35:28.328353 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d47765a40c02714a5797150b68058170a7f60687a3931c4b7538eea401edae64" Feb 01 07:35:28 crc kubenswrapper[4835]: I0201 07:35:28.328388 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k" Feb 01 07:35:28 crc kubenswrapper[4835]: E0201 07:35:28.475395 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59f26b1b_b8b2_4479_8e35_a7a46c629d35.slice\": RecentStats: unable to find data in memory cache]" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302007 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9"] Feb 01 07:35:35 crc kubenswrapper[4835]: E0201 07:35:35.302599 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="util" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302610 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="util" Feb 01 07:35:35 crc kubenswrapper[4835]: E0201 07:35:35.302620 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="extract" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302626 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="extract" Feb 01 07:35:35 crc kubenswrapper[4835]: E0201 07:35:35.302637 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="pull" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302642 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="pull" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.302742 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f26b1b-b8b2-4479-8e35-a7a46c629d35" containerName="extract" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 
07:35:35.303155 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.304874 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-xnddj" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.313728 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9"] Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.387812 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctsjp\" (UniqueName: \"kubernetes.io/projected/b76bd603-252c-4c26-a1c7-0009be5661be-kube-api-access-ctsjp\") pod \"rabbitmq-cluster-operator-779fc9694b-fhcz9\" (UID: \"b76bd603-252c-4c26-a1c7-0009be5661be\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.489319 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctsjp\" (UniqueName: \"kubernetes.io/projected/b76bd603-252c-4c26-a1c7-0009be5661be-kube-api-access-ctsjp\") pod \"rabbitmq-cluster-operator-779fc9694b-fhcz9\" (UID: \"b76bd603-252c-4c26-a1c7-0009be5661be\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.515611 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctsjp\" (UniqueName: \"kubernetes.io/projected/b76bd603-252c-4c26-a1c7-0009be5661be-kube-api-access-ctsjp\") pod \"rabbitmq-cluster-operator-779fc9694b-fhcz9\" (UID: \"b76bd603-252c-4c26-a1c7-0009be5661be\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:35 crc kubenswrapper[4835]: I0201 07:35:35.618727 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" Feb 01 07:35:36 crc kubenswrapper[4835]: I0201 07:35:36.109021 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9"] Feb 01 07:35:36 crc kubenswrapper[4835]: W0201 07:35:36.122845 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb76bd603_252c_4c26_a1c7_0009be5661be.slice/crio-c20e11167610d595b84c87f6d87f2aff893c291cf02c6ccd9268cebab5799fe2 WatchSource:0}: Error finding container c20e11167610d595b84c87f6d87f2aff893c291cf02c6ccd9268cebab5799fe2: Status 404 returned error can't find the container with id c20e11167610d595b84c87f6d87f2aff893c291cf02c6ccd9268cebab5799fe2 Feb 01 07:35:36 crc kubenswrapper[4835]: I0201 07:35:36.398570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" event={"ID":"b76bd603-252c-4c26-a1c7-0009be5661be","Type":"ContainerStarted","Data":"c20e11167610d595b84c87f6d87f2aff893c291cf02c6ccd9268cebab5799fe2"} Feb 01 07:35:39 crc kubenswrapper[4835]: I0201 07:35:39.424580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" event={"ID":"b76bd603-252c-4c26-a1c7-0009be5661be","Type":"ContainerStarted","Data":"5eccc636e49f64cb1c17047d447a67c1b14712efb95f7605cd69bf445160c6d7"} Feb 01 07:35:39 crc kubenswrapper[4835]: I0201 07:35:39.456793 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fhcz9" podStartSLOduration=1.441646446 podStartE2EDuration="4.456759651s" podCreationTimestamp="2026-02-01 07:35:35 +0000 UTC" firstStartedPulling="2026-02-01 07:35:36.125652343 +0000 UTC m=+809.246088777" lastFinishedPulling="2026-02-01 07:35:39.140765498 +0000 UTC m=+812.261201982" observedRunningTime="2026-02-01 07:35:39.446030438 +0000 UTC m=+812.566466902" watchObservedRunningTime="2026-02-01 07:35:39.456759651 +0000 UTC m=+812.577196125" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.731834 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.733660 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.736865 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-server-dockercfg-ztvxx" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.737088 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-default-user" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.737190 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-erlang-cookie" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.737280 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-plugins-conf" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.737227 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-server-conf" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.762062 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.796995 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797046 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797077 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797096 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797144 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797160 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k9mm\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-kube-api-access-9k9mm\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc 
kubenswrapper[4835]: I0201 07:35:41.797195 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.797232 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898509 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898577 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898622 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898658 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k9mm\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-kube-api-access-9k9mm\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898725 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898799 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898871 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.898922 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.900649 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.900695 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.901033 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.901964 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.902033 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fe67dcb5fd9741690176c772121471f4cbb81a238dd7982ba8fc34196e18fb2b/globalmount\"" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.909069 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.911948 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.920370 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.922040 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k9mm\" (UniqueName: 
\"kubernetes.io/projected/34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e-kube-api-access-9k9mm\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:41 crc kubenswrapper[4835]: I0201 07:35:41.934879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2545e0fa-e917-41bc-8b2b-61167eea613d\") pod \"rabbitmq-server-0\" (UID: \"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e\") " pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.062805 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.541467 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Feb 01 07:35:42 crc kubenswrapper[4835]: W0201 07:35:42.554010 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e38bb1_d3dc_46d8_8b2d_8cc583a0a70e.slice/crio-97eecd711505cc5e999b9ae04d7f8884fe5fbf848cb06ab0e2d678fd57c85861 WatchSource:0}: Error finding container 97eecd711505cc5e999b9ae04d7f8884fe5fbf848cb06ab0e2d678fd57c85861: Status 404 returned error can't find the container with id 97eecd711505cc5e999b9ae04d7f8884fe5fbf848cb06ab0e2d678fd57c85861 Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.714683 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-6hv5l"] Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.715475 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.718439 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-pq6mc" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.734108 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-6hv5l"] Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.810930 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwlzx\" (UniqueName: \"kubernetes.io/projected/09002d70-8878-4f31-bc75-ddf7378a8564-kube-api-access-fwlzx\") pod \"keystone-operator-index-6hv5l\" (UID: \"09002d70-8878-4f31-bc75-ddf7378a8564\") " pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.912065 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwlzx\" (UniqueName: \"kubernetes.io/projected/09002d70-8878-4f31-bc75-ddf7378a8564-kube-api-access-fwlzx\") pod \"keystone-operator-index-6hv5l\" (UID: \"09002d70-8878-4f31-bc75-ddf7378a8564\") " pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:42 crc kubenswrapper[4835]: I0201 07:35:42.942144 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwlzx\" (UniqueName: \"kubernetes.io/projected/09002d70-8878-4f31-bc75-ddf7378a8564-kube-api-access-fwlzx\") pod \"keystone-operator-index-6hv5l\" (UID: \"09002d70-8878-4f31-bc75-ddf7378a8564\") " pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:43 crc kubenswrapper[4835]: I0201 07:35:43.040081 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:43 crc kubenswrapper[4835]: I0201 07:35:43.457664 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e","Type":"ContainerStarted","Data":"97eecd711505cc5e999b9ae04d7f8884fe5fbf848cb06ab0e2d678fd57c85861"} Feb 01 07:35:43 crc kubenswrapper[4835]: I0201 07:35:43.465338 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-6hv5l"] Feb 01 07:35:43 crc kubenswrapper[4835]: W0201 07:35:43.476587 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09002d70_8878_4f31_bc75_ddf7378a8564.slice/crio-5106841f6f36b31582e2023703f39b197a96342d9ad65d9a77171b5d90a1c805 WatchSource:0}: Error finding container 5106841f6f36b31582e2023703f39b197a96342d9ad65d9a77171b5d90a1c805: Status 404 returned error can't find the container with id 5106841f6f36b31582e2023703f39b197a96342d9ad65d9a77171b5d90a1c805 Feb 01 07:35:44 crc kubenswrapper[4835]: I0201 07:35:44.463678 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-6hv5l" event={"ID":"09002d70-8878-4f31-bc75-ddf7378a8564","Type":"ContainerStarted","Data":"5106841f6f36b31582e2023703f39b197a96342d9ad65d9a77171b5d90a1c805"} Feb 01 07:35:49 crc kubenswrapper[4835]: I0201 07:35:49.514583 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-6hv5l" event={"ID":"09002d70-8878-4f31-bc75-ddf7378a8564","Type":"ContainerStarted","Data":"e73188334385a7e0e320e25ff5d163c112dac7b4f08f979d16347d097b566b46"} Feb 01 07:35:49 crc kubenswrapper[4835]: I0201 07:35:49.546319 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-6hv5l" podStartSLOduration=3.527454895 podStartE2EDuration="7.546289285s" podCreationTimestamp="2026-02-01 07:35:42 +0000 UTC" firstStartedPulling="2026-02-01 07:35:43.477829088 +0000 UTC m=+816.598265522" lastFinishedPulling="2026-02-01 07:35:47.496663478 +0000 UTC m=+820.617099912" observedRunningTime="2026-02-01 07:35:49.534665929 +0000 UTC m=+822.655102393" watchObservedRunningTime="2026-02-01 07:35:49.546289285 +0000 UTC m=+822.666725759" Feb 01 07:35:50 crc kubenswrapper[4835]: I0201 07:35:50.525487 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e","Type":"ContainerStarted","Data":"247ffa054aae7a8b1b3224a16b77460f26fe6817a4d71d43837f34ade749792d"} Feb 01 07:35:53 crc kubenswrapper[4835]: I0201 07:35:53.041094 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:53 crc kubenswrapper[4835]: I0201 07:35:53.041577 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:53 crc kubenswrapper[4835]: I0201 07:35:53.083271 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:53 crc kubenswrapper[4835]: I0201 07:35:53.596189 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-6hv5l" Feb 01 07:35:54 crc kubenswrapper[4835]: I0201 07:35:54.978950 4835 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm"] Feb 01 07:35:54 crc kubenswrapper[4835]: I0201 07:35:54.981565 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:54 crc kubenswrapper[4835]: I0201 07:35:54.984225 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:35:54 crc kubenswrapper[4835]: I0201 07:35:54.989380 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm"] Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.143916 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.144174 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.144297 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.245977 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.246062 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.246114 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " 
pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.246873 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.247013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.268887 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.314720 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:35:55 crc kubenswrapper[4835]: I0201 07:35:55.787838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm"] Feb 01 07:35:56 crc kubenswrapper[4835]: I0201 07:35:56.573838 4835 generic.go:334] "Generic (PLEG): container finished" podID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerID="72bbaa515813b901a7d0ad68680c4decc5ce25b465f61b3ac1d95201f3bbc5ee" exitCode=0 Feb 01 07:35:56 crc kubenswrapper[4835]: I0201 07:35:56.574077 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerDied","Data":"72bbaa515813b901a7d0ad68680c4decc5ce25b465f61b3ac1d95201f3bbc5ee"} Feb 01 07:35:56 crc kubenswrapper[4835]: I0201 07:35:56.574236 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerStarted","Data":"6e9a23f5045cd6097995370de4c45763374c11683dd08b2135f996ba056f9f60"} Feb 01 07:35:57 crc kubenswrapper[4835]: I0201 07:35:57.582664 4835 generic.go:334] "Generic (PLEG): container finished" podID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerID="793805b90b326aac75f9791b51156de1e873292c5f40bba477f6fd0cdfe721a4" exitCode=0 Feb 01 07:35:57 crc kubenswrapper[4835]: I0201 07:35:57.582867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerDied","Data":"793805b90b326aac75f9791b51156de1e873292c5f40bba477f6fd0cdfe721a4"} Feb 01 07:35:58 crc kubenswrapper[4835]: I0201 07:35:58.597241 4835 
generic.go:334] "Generic (PLEG): container finished" podID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerID="f91f6df3e3f1b5820feba7a26c52eece27a49db37c1bb83bc096d1bffa51331d" exitCode=0 Feb 01 07:35:58 crc kubenswrapper[4835]: I0201 07:35:58.597308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerDied","Data":"f91f6df3e3f1b5820feba7a26c52eece27a49db37c1bb83bc096d1bffa51331d"} Feb 01 07:35:59 crc kubenswrapper[4835]: I0201 07:35:59.940704 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.016548 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") pod \"667e6752-afe4-4918-9457-57c5eb1a6aae\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.017101 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") pod \"667e6752-afe4-4918-9457-57c5eb1a6aae\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.017139 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") pod \"667e6752-afe4-4918-9457-57c5eb1a6aae\" (UID: \"667e6752-afe4-4918-9457-57c5eb1a6aae\") " Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.018169 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle" (OuterVolumeSpecName: "bundle") pod "667e6752-afe4-4918-9457-57c5eb1a6aae" (UID: "667e6752-afe4-4918-9457-57c5eb1a6aae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.023796 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n" (OuterVolumeSpecName: "kube-api-access-l549n") pod "667e6752-afe4-4918-9457-57c5eb1a6aae" (UID: "667e6752-afe4-4918-9457-57c5eb1a6aae"). InnerVolumeSpecName "kube-api-access-l549n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.049555 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util" (OuterVolumeSpecName: "util") pod "667e6752-afe4-4918-9457-57c5eb1a6aae" (UID: "667e6752-afe4-4918-9457-57c5eb1a6aae"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.118813 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l549n\" (UniqueName: \"kubernetes.io/projected/667e6752-afe4-4918-9457-57c5eb1a6aae-kube-api-access-l549n\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.118871 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.118894 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667e6752-afe4-4918-9457-57c5eb1a6aae-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.614752 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" event={"ID":"667e6752-afe4-4918-9457-57c5eb1a6aae","Type":"ContainerDied","Data":"6e9a23f5045cd6097995370de4c45763374c11683dd08b2135f996ba056f9f60"} Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.615068 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e9a23f5045cd6097995370de4c45763374c11683dd08b2135f996ba056f9f60" Feb 01 07:36:00 crc kubenswrapper[4835]: I0201 07:36:00.614865 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.074246 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4"] Feb 01 07:36:12 crc kubenswrapper[4835]: E0201 07:36:12.075076 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="extract" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075093 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="extract" Feb 01 07:36:12 crc kubenswrapper[4835]: E0201 07:36:12.075108 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="util" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075117 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="util" Feb 01 07:36:12 crc kubenswrapper[4835]: E0201 07:36:12.075146 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="pull" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075155 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="pull" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075300 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="667e6752-afe4-4918-9457-57c5eb1a6aae" containerName="extract" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.075819 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.077995 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.078165 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-k9cc8" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.083243 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4"] Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.246063 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-apiservice-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.246119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjp6q\" (UniqueName: \"kubernetes.io/projected/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-kube-api-access-vjp6q\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.246169 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-webhook-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.347245 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-webhook-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.347452 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-apiservice-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.347502 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjp6q\" (UniqueName: \"kubernetes.io/projected/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-kube-api-access-vjp6q\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.353134 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-apiservice-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.360310 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-webhook-cert\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.367013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjp6q\" (UniqueName: \"kubernetes.io/projected/84eb5c79-bae7-43b3-9b04-c949dc8c5ec4-kube-api-access-vjp6q\") pod \"keystone-operator-controller-manager-7ddb6bb5f-7x7n4\" (UID: \"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4\") " pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.395682 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:12 crc kubenswrapper[4835]: I0201 07:36:12.819715 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4"] Feb 01 07:36:12 crc kubenswrapper[4835]: W0201 07:36:12.830689 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84eb5c79_bae7_43b3_9b04_c949dc8c5ec4.slice/crio-7d9909bfe9dd457bb7ae9753ba46c183780f958b7d714c3d260bf4a705b2cde4 WatchSource:0}: Error finding container 7d9909bfe9dd457bb7ae9753ba46c183780f958b7d714c3d260bf4a705b2cde4: Status 404 returned error can't find the container with id 7d9909bfe9dd457bb7ae9753ba46c183780f958b7d714c3d260bf4a705b2cde4 Feb 01 07:36:13 crc kubenswrapper[4835]: I0201 07:36:13.711627 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" event={"ID":"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4","Type":"ContainerStarted","Data":"7d9909bfe9dd457bb7ae9753ba46c183780f958b7d714c3d260bf4a705b2cde4"} Feb 01 07:36:16 crc kubenswrapper[4835]: I0201 07:36:16.731704 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" event={"ID":"84eb5c79-bae7-43b3-9b04-c949dc8c5ec4","Type":"ContainerStarted","Data":"209c2f8a7171f51cfdfc041099d5340638a4d272f8bd3a5c8320542fb7cb27f0"} Feb 01 07:36:16 crc kubenswrapper[4835]: I0201 07:36:16.732203 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:16 crc kubenswrapper[4835]: I0201 07:36:16.762605 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" podStartSLOduration=1.340389298 podStartE2EDuration="4.762581123s" podCreationTimestamp="2026-02-01 07:36:12 +0000 UTC" firstStartedPulling="2026-02-01 07:36:12.835499839 +0000 UTC m=+845.955936313" 
lastFinishedPulling="2026-02-01 07:36:16.257691704 +0000 UTC m=+849.378128138" observedRunningTime="2026-02-01 07:36:16.758014794 +0000 UTC m=+849.878451288" watchObservedRunningTime="2026-02-01 07:36:16.762581123 +0000 UTC m=+849.883017567" Feb 01 07:36:22 crc kubenswrapper[4835]: I0201 07:36:22.401877 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7ddb6bb5f-7x7n4" Feb 01 07:36:22 crc kubenswrapper[4835]: I0201 07:36:22.775221 4835 generic.go:334] "Generic (PLEG): container finished" podID="34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e" containerID="247ffa054aae7a8b1b3224a16b77460f26fe6817a4d71d43837f34ade749792d" exitCode=0 Feb 01 07:36:22 crc kubenswrapper[4835]: I0201 07:36:22.775300 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e","Type":"ContainerDied","Data":"247ffa054aae7a8b1b3224a16b77460f26fe6817a4d71d43837f34ade749792d"} Feb 01 07:36:23 crc kubenswrapper[4835]: I0201 07:36:23.784374 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e","Type":"ContainerStarted","Data":"19e8242448e511e78c6b154dd37c8b1a43d6098db208e09f1d7e0ef72e64e253"} Feb 01 07:36:23 crc kubenswrapper[4835]: I0201 07:36:23.785196 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:36:23 crc kubenswrapper[4835]: I0201 07:36:23.811637 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/rabbitmq-server-0" podStartSLOduration=37.984108617 podStartE2EDuration="43.811617155s" podCreationTimestamp="2026-02-01 07:35:40 +0000 UTC" firstStartedPulling="2026-02-01 07:35:42.557114048 +0000 UTC m=+815.677550472" lastFinishedPulling="2026-02-01 07:35:48.384622536 +0000 UTC m=+821.505059010" observedRunningTime="2026-02-01 07:36:23.805078965 +0000 UTC m=+856.925515399" watchObservedRunningTime="2026-02-01 07:36:23.811617155 +0000 UTC m=+856.932053589" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.111189 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"] Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.112337 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.115963 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"] Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.116991 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.118571 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-db-secret" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.132922 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"] Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.145785 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"] Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.262983 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.263249 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.263332 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.263434 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.364649 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.364726 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.364814 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " 
pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.364852 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.365682 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.365748 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.383125 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") pod \"keystone-d22d-account-create-update-clkrg\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.385056 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") pod \"keystone-db-create-m9js9\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.446770 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:25 crc kubenswrapper[4835]: I0201 07:36:25.453144 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.034862 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"] Feb 01 07:36:26 crc kubenswrapper[4835]: W0201 07:36:26.042217 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod766b4c0a_da92_4fe7_bf95_4a39f3fafafe.slice/crio-f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c WatchSource:0}: Error finding container f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c: Status 404 returned error can't find the container with id f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.154032 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"] Feb 01 07:36:26 crc kubenswrapper[4835]: W0201 07:36:26.161405 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf574f591_2220_4cd1_88f7_ac79ac332aae.slice/crio-116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929 WatchSource:0}: Error finding container 116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929: Status 404 returned error can't find the container with id 116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929 Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.801987 4835 generic.go:334] "Generic (PLEG): container finished" podID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" containerID="215269eb271992c8cbc8e79c691e2434a7dce5223c9258cc1ad2fca20f897f92" exitCode=0 Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.802046 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" event={"ID":"766b4c0a-da92-4fe7-bf95-4a39f3fafafe","Type":"ContainerDied","Data":"215269eb271992c8cbc8e79c691e2434a7dce5223c9258cc1ad2fca20f897f92"} Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.802071 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" event={"ID":"766b4c0a-da92-4fe7-bf95-4a39f3fafafe","Type":"ContainerStarted","Data":"f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c"} Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.803943 4835 generic.go:334] "Generic (PLEG): container finished" podID="f574f591-2220-4cd1-88f7-ac79ac332aae" containerID="fe725302a8ffa5be3e180ac6b253d15da455fbca578acdea4628b374a3cde003" exitCode=0 Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.804023 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-m9js9" event={"ID":"f574f591-2220-4cd1-88f7-ac79ac332aae","Type":"ContainerDied","Data":"fe725302a8ffa5be3e180ac6b253d15da455fbca578acdea4628b374a3cde003"} Feb 01 07:36:26 crc kubenswrapper[4835]: I0201 07:36:26.804070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-m9js9" event={"ID":"f574f591-2220-4cd1-88f7-ac79ac332aae","Type":"ContainerStarted","Data":"116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929"} Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.117242 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-fmwqp"] Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 
07:36:28.118277 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.120368 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-index-dockercfg-8k4l7" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.140140 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-fmwqp"] Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.237393 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.242226 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.273265 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsx9\" (UniqueName: \"kubernetes.io/projected/4fa5ae77-daab-43fa-b798-b9895f717e0a-kube-api-access-8hsx9\") pod \"barbican-operator-index-fmwqp\" (UID: \"4fa5ae77-daab-43fa-b798-b9895f717e0a\") " pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.373900 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") pod \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.374042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") pod \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\" (UID: \"766b4c0a-da92-4fe7-bf95-4a39f3fafafe\") " Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.374087 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") pod \"f574f591-2220-4cd1-88f7-ac79ac332aae\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.374966 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "766b4c0a-da92-4fe7-bf95-4a39f3fafafe" (UID: "766b4c0a-da92-4fe7-bf95-4a39f3fafafe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.375201 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") pod \"f574f591-2220-4cd1-88f7-ac79ac332aae\" (UID: \"f574f591-2220-4cd1-88f7-ac79ac332aae\") " Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.375623 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f574f591-2220-4cd1-88f7-ac79ac332aae" (UID: "f574f591-2220-4cd1-88f7-ac79ac332aae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.375675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hsx9\" (UniqueName: \"kubernetes.io/projected/4fa5ae77-daab-43fa-b798-b9895f717e0a-kube-api-access-8hsx9\") pod \"barbican-operator-index-fmwqp\" (UID: \"4fa5ae77-daab-43fa-b798-b9895f717e0a\") " pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.376632 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.376667 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f574f591-2220-4cd1-88f7-ac79ac332aae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.384703 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4" (OuterVolumeSpecName: "kube-api-access-mnqp4") pod "f574f591-2220-4cd1-88f7-ac79ac332aae" (UID: "f574f591-2220-4cd1-88f7-ac79ac332aae"). InnerVolumeSpecName "kube-api-access-mnqp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.384812 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc" (OuterVolumeSpecName: "kube-api-access-w2cqc") pod "766b4c0a-da92-4fe7-bf95-4a39f3fafafe" (UID: "766b4c0a-da92-4fe7-bf95-4a39f3fafafe"). InnerVolumeSpecName "kube-api-access-w2cqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.399527 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hsx9\" (UniqueName: \"kubernetes.io/projected/4fa5ae77-daab-43fa-b798-b9895f717e0a-kube-api-access-8hsx9\") pod \"barbican-operator-index-fmwqp\" (UID: \"4fa5ae77-daab-43fa-b798-b9895f717e0a\") " pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.435162 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.489833 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2cqc\" (UniqueName: \"kubernetes.io/projected/766b4c0a-da92-4fe7-bf95-4a39f3fafafe-kube-api-access-w2cqc\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.489865 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnqp4\" (UniqueName: \"kubernetes.io/projected/f574f591-2220-4cd1-88f7-ac79ac332aae-kube-api-access-mnqp4\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.818907 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" event={"ID":"766b4c0a-da92-4fe7-bf95-4a39f3fafafe","Type":"ContainerDied","Data":"f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c"} Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.819217 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f050955b679b3d41173f8715f8fffb201213503cbe8bc44f4e4442841d5e408c" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.818926 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-d22d-account-create-update-clkrg" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.820630 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-m9js9" event={"ID":"f574f591-2220-4cd1-88f7-ac79ac332aae","Type":"ContainerDied","Data":"116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929"} Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.820658 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="116ea189d3c24ff88f9f58a9d2d496b057c28fec8050662fa6bda2519ef94929" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.820691 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-m9js9" Feb 01 07:36:28 crc kubenswrapper[4835]: I0201 07:36:28.910185 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-fmwqp"] Feb 01 07:36:29 crc kubenswrapper[4835]: I0201 07:36:29.833669 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-fmwqp" event={"ID":"4fa5ae77-daab-43fa-b798-b9895f717e0a","Type":"ContainerStarted","Data":"54a03cd57752b9215cc8a2e7918ca730a1757de3e169d7a8117c5684a6058844"} Feb 01 07:36:30 crc kubenswrapper[4835]: I0201 07:36:30.852485 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-fmwqp" event={"ID":"4fa5ae77-daab-43fa-b798-b9895f717e0a","Type":"ContainerStarted","Data":"2f82ecb9b26c9b5db30b43b7b808bf402cb55b491714a5b1ff685deee7aa0a06"} Feb 01 07:36:30 crc kubenswrapper[4835]: I0201 07:36:30.884014 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-fmwqp" podStartSLOduration=2.0696765 podStartE2EDuration="2.883979984s" podCreationTimestamp="2026-02-01 07:36:28 +0000 UTC" firstStartedPulling="2026-02-01 07:36:28.920379686 +0000 UTC m=+862.040816120" lastFinishedPulling="2026-02-01 07:36:29.73468317 +0000 UTC m=+862.855119604" observedRunningTime="2026-02-01 07:36:30.8746099 +0000 UTC m=+863.995046344" watchObservedRunningTime="2026-02-01 07:36:30.883979984 +0000 UTC m=+864.004416458" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.067404 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/rabbitmq-server-0" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.564250 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"] Feb 01 07:36:32 crc kubenswrapper[4835]: E0201 07:36:32.564986 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f574f591-2220-4cd1-88f7-ac79ac332aae" containerName="mariadb-database-create" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565004 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f574f591-2220-4cd1-88f7-ac79ac332aae" containerName="mariadb-database-create" Feb 01 07:36:32 crc kubenswrapper[4835]: E0201 07:36:32.565035 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" containerName="mariadb-account-create-update" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565042 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" containerName="mariadb-account-create-update" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565164 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f574f591-2220-4cd1-88f7-ac79ac332aae" containerName="mariadb-database-create" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565177 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" containerName="mariadb-account-create-update" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.565762 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.567848 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.568053 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.569073 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-hgb5p" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.571402 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.576219 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"] Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.662545 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.662659 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.764383 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.764837 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.770815 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.781381 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") pod \"keystone-db-sync-5w5sr\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:32 crc kubenswrapper[4835]: I0201 07:36:32.896809 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:33 crc kubenswrapper[4835]: I0201 07:36:33.217760 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"] Feb 01 07:36:33 crc kubenswrapper[4835]: W0201 07:36:33.227292 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd1d09a3_13ff_43c0_835a_de9a6f9b5103.slice/crio-bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe WatchSource:0}: Error finding container bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe: Status 404 returned error can't find the container with id bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe Feb 01 07:36:33 crc kubenswrapper[4835]: I0201 07:36:33.876393 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" event={"ID":"cd1d09a3-13ff-43c0-835a-de9a6f9b5103","Type":"ContainerStarted","Data":"bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe"} Feb 01 07:36:38 crc kubenswrapper[4835]: I0201 07:36:38.435650 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:38 crc kubenswrapper[4835]: I0201 07:36:38.436610 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:38 crc kubenswrapper[4835]: I0201 07:36:38.476074 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:38 crc kubenswrapper[4835]: I0201 07:36:38.962603 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-index-fmwqp" Feb 01 07:36:43 crc kubenswrapper[4835]: I0201 07:36:43.987475 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" event={"ID":"cd1d09a3-13ff-43c0-835a-de9a6f9b5103","Type":"ContainerStarted","Data":"a06f9b42349fa2ea28d87918e953134cff78d85714b4da730fc4895d65231d70"} Feb 01 07:36:44 crc kubenswrapper[4835]: I0201 07:36:44.020858 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" podStartSLOduration=1.937893903 podStartE2EDuration="12.020831434s" podCreationTimestamp="2026-02-01 07:36:32 +0000 UTC" firstStartedPulling="2026-02-01 07:36:33.229955385 +0000 UTC m=+866.350391819" lastFinishedPulling="2026-02-01 07:36:43.312892876 +0000 UTC m=+876.433329350" observedRunningTime="2026-02-01 07:36:44.006529782 +0000 UTC m=+877.126966276" watchObservedRunningTime="2026-02-01 07:36:44.020831434 +0000 UTC m=+877.141267908" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.012522 4835 generic.go:334] "Generic (PLEG): container finished" podID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" containerID="a06f9b42349fa2ea28d87918e953134cff78d85714b4da730fc4895d65231d70" exitCode=0 Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.012656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" event={"ID":"cd1d09a3-13ff-43c0-835a-de9a6f9b5103","Type":"ContainerDied","Data":"a06f9b42349fa2ea28d87918e953134cff78d85714b4da730fc4895d65231d70"} Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.768984 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf"] Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.771189 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.773716 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.779502 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf"] Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.859955 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.860052 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.860087 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.961520 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.961724 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.961799 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 
crc kubenswrapper[4835]: I0201 07:36:47.962523 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.962550 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:47 crc kubenswrapper[4835]: I0201 07:36:47.999513 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") pod \"55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.142568 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.357069 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.367877 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf"] Feb 01 07:36:48 crc kubenswrapper[4835]: W0201 07:36:48.378343 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34b15f05_4416_4999_ba8c_3bc64ada7f04.slice/crio-c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a WatchSource:0}: Error finding container c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a: Status 404 returned error can't find the container with id c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.468532 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") pod \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.468606 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") pod \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\" (UID: \"cd1d09a3-13ff-43c0-835a-de9a6f9b5103\") " Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.475250 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh" (OuterVolumeSpecName: "kube-api-access-89qxh") pod "cd1d09a3-13ff-43c0-835a-de9a6f9b5103" (UID: 
"cd1d09a3-13ff-43c0-835a-de9a6f9b5103"). InnerVolumeSpecName "kube-api-access-89qxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.507101 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data" (OuterVolumeSpecName: "config-data") pod "cd1d09a3-13ff-43c0-835a-de9a6f9b5103" (UID: "cd1d09a3-13ff-43c0-835a-de9a6f9b5103"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.570513 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:48 crc kubenswrapper[4835]: I0201 07:36:48.570580 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89qxh\" (UniqueName: \"kubernetes.io/projected/cd1d09a3-13ff-43c0-835a-de9a6f9b5103-kube-api-access-89qxh\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.037586 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" event={"ID":"cd1d09a3-13ff-43c0-835a-de9a6f9b5103","Type":"ContainerDied","Data":"bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe"} Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.037611 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-5w5sr" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.038225 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb07ccc83c8b95c7749afa42417dd2f772f0a6c5857837894045b51d53900cfe" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.040259 4835 generic.go:334] "Generic (PLEG): container finished" podID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerID="a758f80f79a264f26eed6f223a42becc5edd1638586fb21bac9054e3130e751b" exitCode=0 Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.040326 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerDied","Data":"a758f80f79a264f26eed6f223a42becc5edd1638586fb21bac9054e3130e751b"} Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.040370 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerStarted","Data":"c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a"} Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.228934 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"] Feb 01 07:36:49 crc kubenswrapper[4835]: E0201 07:36:49.229320 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" containerName="keystone-db-sync" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.229341 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" containerName="keystone-db-sync" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.229549 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" 
containerName="keystone-db-sync" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.230277 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.234065 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.235121 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"osp-secret" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.235526 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.237569 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-hgb5p" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.237954 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.257913 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"] Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383136 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383279 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383340 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383522 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.383617 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.484811 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") pod 
\"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.484931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.485004 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.485102 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.485186 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.490960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.491370 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.491925 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.492209 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.509766 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") pod \"keystone-bootstrap-6pjmn\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc 
kubenswrapper[4835]: I0201 07:36:49.557276 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:49 crc kubenswrapper[4835]: I0201 07:36:49.816902 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"] Feb 01 07:36:50 crc kubenswrapper[4835]: I0201 07:36:50.048517 4835 generic.go:334] "Generic (PLEG): container finished" podID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerID="5ab4a566f333981e56101c0b8a532c167e6d02046e37b70ae7cd86f9e5074387" exitCode=0 Feb 01 07:36:50 crc kubenswrapper[4835]: I0201 07:36:50.048640 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerDied","Data":"5ab4a566f333981e56101c0b8a532c167e6d02046e37b70ae7cd86f9e5074387"} Feb 01 07:36:50 crc kubenswrapper[4835]: I0201 07:36:50.052142 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" event={"ID":"bf026661-c9af-420a-8984-f7fbe212e592","Type":"ContainerStarted","Data":"eabeabeae4f73ee57a400f521880f710c03aa93decaac629af5189bf021874a3"} Feb 01 07:36:50 crc kubenswrapper[4835]: I0201 07:36:50.052261 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" event={"ID":"bf026661-c9af-420a-8984-f7fbe212e592","Type":"ContainerStarted","Data":"f9a81db13b96f0df74cca3b4f709858369386ac419f8b4c76dd40e96cb1e2a57"} Feb 01 07:36:51 crc kubenswrapper[4835]: I0201 07:36:51.071499 4835 generic.go:334] "Generic (PLEG): container finished" podID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerID="050f1d779236f9d40b785b72ed4086ba32ddc3b81a4f58145ebfbbebb1134455" exitCode=0 Feb 01 07:36:51 crc kubenswrapper[4835]: I0201 07:36:51.072671 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerDied","Data":"050f1d779236f9d40b785b72ed4086ba32ddc3b81a4f58145ebfbbebb1134455"} Feb 01 07:36:51 crc kubenswrapper[4835]: I0201 07:36:51.107359 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" podStartSLOduration=2.107335 podStartE2EDuration="2.107335s" podCreationTimestamp="2026-02-01 07:36:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:36:50.096165827 +0000 UTC m=+883.216602261" watchObservedRunningTime="2026-02-01 07:36:51.107335 +0000 UTC m=+884.227771474" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.407742 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.433124 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") pod \"34b15f05-4416-4999-ba8c-3bc64ada7f04\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.433293 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") pod \"34b15f05-4416-4999-ba8c-3bc64ada7f04\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.433352 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") pod \"34b15f05-4416-4999-ba8c-3bc64ada7f04\" (UID: \"34b15f05-4416-4999-ba8c-3bc64ada7f04\") " Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.435375 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle" (OuterVolumeSpecName: "bundle") pod "34b15f05-4416-4999-ba8c-3bc64ada7f04" (UID: "34b15f05-4416-4999-ba8c-3bc64ada7f04"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.442593 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4" (OuterVolumeSpecName: "kube-api-access-s48t4") pod "34b15f05-4416-4999-ba8c-3bc64ada7f04" (UID: "34b15f05-4416-4999-ba8c-3bc64ada7f04"). InnerVolumeSpecName "kube-api-access-s48t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.452609 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util" (OuterVolumeSpecName: "util") pod "34b15f05-4416-4999-ba8c-3bc64ada7f04" (UID: "34b15f05-4416-4999-ba8c-3bc64ada7f04"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.535212 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.535251 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34b15f05-4416-4999-ba8c-3bc64ada7f04-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:52 crc kubenswrapper[4835]: I0201 07:36:52.535281 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s48t4\" (UniqueName: \"kubernetes.io/projected/34b15f05-4416-4999-ba8c-3bc64ada7f04-kube-api-access-s48t4\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.093150 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" event={"ID":"34b15f05-4416-4999-ba8c-3bc64ada7f04","Type":"ContainerDied","Data":"c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a"} Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.093213 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0df356a392eca344c42898289354373cc8f005ff577b910ebcd701d4598b57a" Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.093163 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf" Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.097825 4835 generic.go:334] "Generic (PLEG): container finished" podID="bf026661-c9af-420a-8984-f7fbe212e592" containerID="eabeabeae4f73ee57a400f521880f710c03aa93decaac629af5189bf021874a3" exitCode=0 Feb 01 07:36:53 crc kubenswrapper[4835]: I0201 07:36:53.097891 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" event={"ID":"bf026661-c9af-420a-8984-f7fbe212e592","Type":"ContainerDied","Data":"eabeabeae4f73ee57a400f521880f710c03aa93decaac629af5189bf021874a3"} Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.425701 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.466629 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.467003 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.467045 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.467067 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.467140 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") pod \"bf026661-c9af-420a-8984-f7fbe212e592\" (UID: \"bf026661-c9af-420a-8984-f7fbe212e592\") " Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.472756 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25" (OuterVolumeSpecName: "kube-api-access-xbt25") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "kube-api-access-xbt25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.473082 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.474095 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.475620 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts" (OuterVolumeSpecName: "scripts") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.495016 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data" (OuterVolumeSpecName: "config-data") pod "bf026661-c9af-420a-8984-f7fbe212e592" (UID: "bf026661-c9af-420a-8984-f7fbe212e592"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569734 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbt25\" (UniqueName: \"kubernetes.io/projected/bf026661-c9af-420a-8984-f7fbe212e592-kube-api-access-xbt25\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569782 4835 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569800 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569819 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:54 crc kubenswrapper[4835]: I0201 07:36:54.569835 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf026661-c9af-420a-8984-f7fbe212e592-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.117534 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" event={"ID":"bf026661-c9af-420a-8984-f7fbe212e592","Type":"ContainerDied","Data":"f9a81db13b96f0df74cca3b4f709858369386ac419f8b4c76dd40e96cb1e2a57"} Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.117591 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a81db13b96f0df74cca3b4f709858369386ac419f8b4c76dd40e96cb1e2a57" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.117655 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-6pjmn" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.332461 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-95fb65664-fmplj"] Feb 01 07:36:55 crc kubenswrapper[4835]: E0201 07:36:55.332859 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="pull" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.332892 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="pull" Feb 01 07:36:55 crc kubenswrapper[4835]: E0201 07:36:55.332940 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="extract" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.332954 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="extract" Feb 01 07:36:55 crc kubenswrapper[4835]: E0201 07:36:55.332987 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="util" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.333005 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="util" Feb 01 07:36:55 crc kubenswrapper[4835]: E0201 07:36:55.333029 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf026661-c9af-420a-8984-f7fbe212e592" containerName="keystone-bootstrap" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.333042 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf026661-c9af-420a-8984-f7fbe212e592" containerName="keystone-bootstrap" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.333275 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b15f05-4416-4999-ba8c-3bc64ada7f04" containerName="extract" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.333313 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf026661-c9af-420a-8984-f7fbe212e592" containerName="keystone-bootstrap" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.334036 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.339238 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.339922 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.340240 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.340407 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-hgb5p" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.350440 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-95fb65664-fmplj"] Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481040 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-config-data\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481533 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-credential-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481602 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-fernet-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481695 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m7hb\" (UniqueName: \"kubernetes.io/projected/99f218fc-86ce-4952-a7cd-4c80a7cfe774-kube-api-access-9m7hb\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.481736 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-scripts\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.583215 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-config-data\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.583878 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-credential-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.584006 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-fernet-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.584096 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m7hb\" (UniqueName: \"kubernetes.io/projected/99f218fc-86ce-4952-a7cd-4c80a7cfe774-kube-api-access-9m7hb\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.584133 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-scripts\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.589129 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-scripts\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.592692 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-fernet-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.593093 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-credential-keys\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.593954 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99f218fc-86ce-4952-a7cd-4c80a7cfe774-config-data\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.627831 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m7hb\" (UniqueName: \"kubernetes.io/projected/99f218fc-86ce-4952-a7cd-4c80a7cfe774-kube-api-access-9m7hb\") pod \"keystone-95fb65664-fmplj\" (UID: \"99f218fc-86ce-4952-a7cd-4c80a7cfe774\") " pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:55 crc kubenswrapper[4835]: I0201 07:36:55.659055 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:56 crc kubenswrapper[4835]: I0201 07:36:56.163279 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-95fb65664-fmplj"] Feb 01 07:36:57 crc kubenswrapper[4835]: I0201 07:36:57.134254 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" event={"ID":"99f218fc-86ce-4952-a7cd-4c80a7cfe774","Type":"ContainerStarted","Data":"690776ed1a952a39556bea2de8bcf6435198d5b3c2e1610fcabed2621cb7dc94"} Feb 01 07:36:57 crc kubenswrapper[4835]: I0201 07:36:57.134530 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" event={"ID":"99f218fc-86ce-4952-a7cd-4c80a7cfe774","Type":"ContainerStarted","Data":"af968c38f8638debe9c415c87965a2e0d0d002fb31c35b345e8b1bec429487f8"} Feb 01 07:36:57 crc kubenswrapper[4835]: I0201 07:36:57.134548 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:36:57 crc kubenswrapper[4835]: I0201 07:36:57.177531 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" podStartSLOduration=2.177512899 podStartE2EDuration="2.177512899s" podCreationTimestamp="2026-02-01 07:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:36:57.173471314 +0000 UTC m=+890.293907768" watchObservedRunningTime="2026-02-01 07:36:57.177512899 +0000 UTC m=+890.297949343" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.217973 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5"] Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.219142 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.221255 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-service-cert" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.226961 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cfg6b" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.239761 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5"] Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.373723 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgfx\" (UniqueName: \"kubernetes.io/projected/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-kube-api-access-5lgfx\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.373796 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-webhook-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.373958 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-apiservice-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.475511 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lgfx\" (UniqueName: \"kubernetes.io/projected/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-kube-api-access-5lgfx\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.475583 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-webhook-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.475619 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-apiservice-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.482032 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-webhook-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.486176 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-apiservice-cert\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.495877 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lgfx\" (UniqueName: \"kubernetes.io/projected/2562b9ca-8a8f-4a90-8e8f-fd3e4b235603-kube-api-access-5lgfx\") pod \"barbican-operator-controller-manager-854bb59648-nqzs5\" (UID: \"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603\") " pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.541465 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cfg6b" Feb 01 07:37:07 crc kubenswrapper[4835]: I0201 07:37:07.550011 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:08 crc kubenswrapper[4835]: I0201 07:37:08.045024 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5"] Feb 01 07:37:08 crc kubenswrapper[4835]: W0201 07:37:08.060652 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2562b9ca_8a8f_4a90_8e8f_fd3e4b235603.slice/crio-64607037233e18f4e976bbc187db317c6ea483ff48896787cedb260a1d41a2ac WatchSource:0}: Error finding container 64607037233e18f4e976bbc187db317c6ea483ff48896787cedb260a1d41a2ac: Status 404 returned error can't find the container with id 64607037233e18f4e976bbc187db317c6ea483ff48896787cedb260a1d41a2ac Feb 01 07:37:08 crc kubenswrapper[4835]: I0201 07:37:08.217383 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" event={"ID":"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603","Type":"ContainerStarted","Data":"64607037233e18f4e976bbc187db317c6ea483ff48896787cedb260a1d41a2ac"} Feb 01 07:37:10 crc kubenswrapper[4835]: I0201 07:37:10.233928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" event={"ID":"2562b9ca-8a8f-4a90-8e8f-fd3e4b235603","Type":"ContainerStarted","Data":"7c6b9db5255affd468e01ac18d8fa746d09be373f766dfcd52f131bc3d21f610"} Feb 01 07:37:10 crc kubenswrapper[4835]: I0201 07:37:10.235327 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:10 crc kubenswrapper[4835]: I0201 07:37:10.254694 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" 
podStartSLOduration=1.361429188 podStartE2EDuration="3.254675647s" podCreationTimestamp="2026-02-01 07:37:07 +0000 UTC" firstStartedPulling="2026-02-01 07:37:08.062917916 +0000 UTC m=+901.183354350" lastFinishedPulling="2026-02-01 07:37:09.956164335 +0000 UTC m=+903.076600809" observedRunningTime="2026-02-01 07:37:10.251877224 +0000 UTC m=+903.372313658" watchObservedRunningTime="2026-02-01 07:37:10.254675647 +0000 UTC m=+903.375112101" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.525215 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.528139 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.552259 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.593268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.593357 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.593390 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.697468 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.697580 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.697620 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj"
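Note on the "Observed pod startup duration" record at 07:37:10 above: the numbers are self-consistent on the monotonic clock (the m=+... offsets). Image pulling ran from m=+901.183354350 to m=+903.076600809, i.e. 1.893246459s, and podStartSLOduration is podStartE2EDuration minus that pull window: 3.254675647 - 1.893246459 = 1.361429188. A stand-alone Go check of the arithmetic (field names mirror the log; the program is ours, not kubelet code):

    package main

    import "fmt"

    func main() {
        // Monotonic-clock offsets (the m=+... values) from the 07:37:10 record.
        firstStartedPulling := 901.183354350
        lastFinishedPulling := 903.076600809
        pullTime := lastFinishedPulling - firstStartedPulling // 1.893246459s spent pulling images

        podStartE2EDuration := 3.254675647 // observedRunningTime - podCreationTimestamp
        // podStartSLOduration appears to exclude the image-pull window:
        fmt.Printf("podStartSLOduration=%.9f\n", podStartE2EDuration-pullTime) // prints 1.361429188
    }

The same relation holds for every other startup-duration record in this excerpt; for pods whose images needed no pull (sentinel 0001-01-01 timestamps, as in the keystone record at 07:36:57), the two durations coincide.

Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.698400 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 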
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.699335 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.732506 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") pod \"redhat-marketplace-vpzbj\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:14 crc kubenswrapper[4835]: I0201 07:37:14.867668 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:15 crc kubenswrapper[4835]: I0201 07:37:15.354963 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:15 crc kubenswrapper[4835]: W0201 07:37:15.362466 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619d1e1e_0c68_4844_86de_2e62153f4f43.slice/crio-1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489 WatchSource:0}: Error finding container 1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489: Status 404 returned error can't find the container with id 1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489 Feb 01 07:37:16 crc kubenswrapper[4835]: I0201 07:37:16.283287 4835 generic.go:334] "Generic (PLEG): container finished" podID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerID="e7201f59379d2bead3c65bd0afefdb43d2476d1d60b92329b0df28725c4698f2" exitCode=0 Feb 01 07:37:16 crc kubenswrapper[4835]: I0201 07:37:16.283461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerDied","Data":"e7201f59379d2bead3c65bd0afefdb43d2476d1d60b92329b0df28725c4698f2"} Feb 01 07:37:16 crc kubenswrapper[4835]: I0201 07:37:16.284674 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerStarted","Data":"1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489"} Feb 01 07:37:16 crc kubenswrapper[4835]: I0201 07:37:16.286232 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 07:37:17 crc kubenswrapper[4835]: I0201 07:37:17.297166 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerStarted","Data":"9f2a1ab62add49cdff4bda274340a783a0c8dd6e31e40a7b2352055515842834"} Feb 01 07:37:17 crc kubenswrapper[4835]: I0201 07:37:17.557049 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-854bb59648-nqzs5" Feb 01 07:37:18 crc 
kubenswrapper[4835]: I0201 07:37:18.307586 4835 generic.go:334] "Generic (PLEG): container finished" podID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerID="9f2a1ab62add49cdff4bda274340a783a0c8dd6e31e40a7b2352055515842834" exitCode=0 Feb 01 07:37:18 crc kubenswrapper[4835]: I0201 07:37:18.307651 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerDied","Data":"9f2a1ab62add49cdff4bda274340a783a0c8dd6e31e40a7b2352055515842834"} Feb 01 07:37:19 crc kubenswrapper[4835]: I0201 07:37:19.317563 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerStarted","Data":"53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7"} Feb 01 07:37:19 crc kubenswrapper[4835]: I0201 07:37:19.335739 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpzbj" podStartSLOduration=2.880756722 podStartE2EDuration="5.335720116s" podCreationTimestamp="2026-02-01 07:37:14 +0000 UTC" firstStartedPulling="2026-02-01 07:37:16.285707999 +0000 UTC m=+909.406144473" lastFinishedPulling="2026-02-01 07:37:18.740671393 +0000 UTC m=+911.861107867" observedRunningTime="2026-02-01 07:37:19.333862918 +0000 UTC m=+912.454299362" watchObservedRunningTime="2026-02-01 07:37:19.335720116 +0000 UTC m=+912.456156560" Feb 01 07:37:24 crc kubenswrapper[4835]: I0201 07:37:24.868065 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:24 crc kubenswrapper[4835]: I0201 07:37:24.868644 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:24 crc kubenswrapper[4835]: I0201 07:37:24.925525 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.192449 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.192530 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.442183 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.455663 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"]
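The pair of 07:37:25.192 entries above is the only probe failure in this excerpt: the kubelet issued an HTTP GET against http://127.0.0.1:8798/health for the machine-config-daemon container and got connection refused, so patch_prober.go logs the raw output and prober.go records probeResult="failure". A simplified stand-alone sketch of such an HTTP health check (the kubelet's real prober also sets request headers, caps redirects, and feeds results into the SyncLoop probe updates seen here):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe is a stripped-down stand-in for an HTTP liveness check.
    func probe(url string) error {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Probe failed:", err)
        }
    }

Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.456486 4835 util.go:30] "No sandbox for pod can be found. 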
Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.465179 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.465253 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.472040 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.538760 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w65gv"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.540279 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.553658 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w65gv"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.564807 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566175 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566209 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566259 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24svr\" (UniqueName: \"kubernetes.io/projected/7f1e8788-786f-4f9d-b492-3a036764b28d-kube-api-access-24svr\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566289 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-catalog-content\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.566311 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-utilities\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.567611 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.568179 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.570801 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-db-secret" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.593015 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"] Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.600085 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") pod \"barbican-db-create-ddqhc\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667028 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667098 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667319 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24svr\" (UniqueName: \"kubernetes.io/projected/7f1e8788-786f-4f9d-b492-3a036764b28d-kube-api-access-24svr\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667437 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-catalog-content\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667479 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-utilities\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667908 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-catalog-content\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.667957 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f1e8788-786f-4f9d-b492-3a036764b28d-utilities\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.700078 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24svr\" (UniqueName: \"kubernetes.io/projected/7f1e8788-786f-4f9d-b492-3a036764b28d-kube-api-access-24svr\") pod \"community-operators-w65gv\" (UID: \"7f1e8788-786f-4f9d-b492-3a036764b28d\") " pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.768333 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.768454 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.769029 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.784999 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") pod \"barbican-2ff5-account-create-update-9hbgv\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.831042 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.852331 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:25 crc kubenswrapper[4835]: I0201 07:37:25.884886 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.284755 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w65gv"] Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.314418 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"] Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.371514 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"] Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.398722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" event={"ID":"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0","Type":"ContainerStarted","Data":"5fd895ce994f67bbc723f4f973be658c7771a96994f15c9d7d69a9d632d3cac3"} Feb 01 07:37:26 crc kubenswrapper[4835]: I0201 07:37:26.405758 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w65gv" event={"ID":"7f1e8788-786f-4f9d-b492-3a036764b28d","Type":"ContainerStarted","Data":"f87ab5c4674c0034adb701c72c950d4ef6c4f3fd07b22b504ababb012b02a61e"} Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.041317 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/keystone-95fb65664-fmplj" Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.418561 4835 generic.go:334] "Generic (PLEG): container finished" podID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" containerID="212958e93fcbd8f3fdf3afad7d233490e91ef9f2cf2380e3ac58f8cc1722a0b6" exitCode=0 Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.418776 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" event={"ID":"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0","Type":"ContainerDied","Data":"212958e93fcbd8f3fdf3afad7d233490e91ef9f2cf2380e3ac58f8cc1722a0b6"} Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.421475 4835 generic.go:334] "Generic (PLEG): container finished" podID="7f1e8788-786f-4f9d-b492-3a036764b28d" containerID="49b373fec160f5cd6ed7a7b91abccb255e85b7dd2f70bd40f149249b995f3798" exitCode=0 Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.421565 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w65gv" event={"ID":"7f1e8788-786f-4f9d-b492-3a036764b28d","Type":"ContainerDied","Data":"49b373fec160f5cd6ed7a7b91abccb255e85b7dd2f70bd40f149249b995f3798"} Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.425864 4835 generic.go:334] "Generic (PLEG): container finished" podID="26692abf-b5f8-4461-992d-508cb9b73bb2" containerID="65cf85b1dd72d5635988e485f041129154e6406263a9f9918622bbd9bb651c81" exitCode=0 Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.425909 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-ddqhc" event={"ID":"26692abf-b5f8-4461-992d-508cb9b73bb2","Type":"ContainerDied","Data":"65cf85b1dd72d5635988e485f041129154e6406263a9f9918622bbd9bb651c81"} Feb 01 07:37:27 crc kubenswrapper[4835]: I0201 07:37:27.425928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/barbican-db-create-ddqhc" event={"ID":"26692abf-b5f8-4461-992d-508cb9b73bb2","Type":"ContainerStarted","Data":"d01e4179e0c583283545d6ed590e773396c25e1c03cb1cefcbe0609190b9a7b4"} Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.880324 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.883500 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.920629 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") pod \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.920727 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") pod \"26692abf-b5f8-4461-992d-508cb9b73bb2\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.920760 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") pod \"26692abf-b5f8-4461-992d-508cb9b73bb2\" (UID: \"26692abf-b5f8-4461-992d-508cb9b73bb2\") " Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.920812 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") pod \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\" (UID: \"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0\") " Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.922066 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26692abf-b5f8-4461-992d-508cb9b73bb2" (UID: "26692abf-b5f8-4461-992d-508cb9b73bb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.922176 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" (UID: "545f3a5d-c02e-45f2-aba5-ea50bf4fccd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.928039 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt" (OuterVolumeSpecName: "kube-api-access-h55rt") pod "545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" (UID: "545f3a5d-c02e-45f2-aba5-ea50bf4fccd0"). InnerVolumeSpecName "kube-api-access-h55rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:28 crc kubenswrapper[4835]: I0201 07:37:28.928625 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7" (OuterVolumeSpecName: "kube-api-access-82hj7") pod "26692abf-b5f8-4461-992d-508cb9b73bb2" (UID: "26692abf-b5f8-4461-992d-508cb9b73bb2"). InnerVolumeSpecName "kube-api-access-82hj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.024036 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h55rt\" (UniqueName: \"kubernetes.io/projected/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-kube-api-access-h55rt\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.024071 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.024080 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26692abf-b5f8-4461-992d-508cb9b73bb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.024089 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82hj7\" (UniqueName: \"kubernetes.io/projected/26692abf-b5f8-4461-992d-508cb9b73bb2-kube-api-access-82hj7\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.449459 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" event={"ID":"545f3a5d-c02e-45f2-aba5-ea50bf4fccd0","Type":"ContainerDied","Data":"5fd895ce994f67bbc723f4f973be658c7771a96994f15c9d7d69a9d632d3cac3"} Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.449512 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fd895ce994f67bbc723f4f973be658c7771a96994f15c9d7d69a9d632d3cac3" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.449474 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.451457 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-ddqhc" event={"ID":"26692abf-b5f8-4461-992d-508cb9b73bb2","Type":"ContainerDied","Data":"d01e4179e0c583283545d6ed590e773396c25e1c03cb1cefcbe0609190b9a7b4"} Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.451496 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d01e4179e0c583283545d6ed590e773396c25e1c03cb1cefcbe0609190b9a7b4"
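The 07:37:28-29 entries above show volume teardown for the two completed jobs, mirroring the mount pipeline in reverse: operationExecutor.UnmountVolume started (reconciler_common.go:159), then UnmountVolume.TearDown succeeded per plugin (operation_generator.go:803), then "Volume detached" with an empty DevicePath (reconciler_common.go:293). A minimal Go model of that ordering (names are illustrative, not kubelet types):

    package main

    import "fmt"

    // unmountAll walks the three logged stages in order for each volume
    // of a deleted pod.
    func unmountAll(podUID string, volumes []string) {
        for _, v := range volumes {
            fmt.Printf("UnmountVolume started for %q pod %q\n", v, podUID)
            // the volume plugin's TearDown runs here; on success:
            fmt.Printf("UnmountVolume.TearDown succeeded for %q\n", v)
            // finally the reconciler records the volume as gone from the node:
            fmt.Printf("Volume detached for %q on node \"crc\", DevicePath \"\"\n", v)
        }
    }

    func main() {
        unmountAll("26692abf-b5f8-4461-992d-508cb9b73bb2",
            []string{"operator-scripts", "kube-api-access-82hj7"})
    }

Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.451545 4835 util.go:48] "No ready sandbox for pod can be found. 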
Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-ddqhc" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519012 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-tj2nn"] Feb 01 07:37:29 crc kubenswrapper[4835]: E0201 07:37:29.519326 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26692abf-b5f8-4461-992d-508cb9b73bb2" containerName="mariadb-database-create" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519343 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="26692abf-b5f8-4461-992d-508cb9b73bb2" containerName="mariadb-database-create" Feb 01 07:37:29 crc kubenswrapper[4835]: E0201 07:37:29.519364 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" containerName="mariadb-account-create-update" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519371 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" containerName="mariadb-account-create-update" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519551 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" containerName="mariadb-account-create-update" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.519566 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="26692abf-b5f8-4461-992d-508cb9b73bb2" containerName="mariadb-database-create" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.520152 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.524982 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-j5f24" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.531341 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm2qx\" (UniqueName: \"kubernetes.io/projected/ebf9c948-3fde-47f0-aa35-856193c1a275-kube-api-access-hm2qx\") pod \"swift-operator-index-tj2nn\" (UID: \"ebf9c948-3fde-47f0-aa35-856193c1a275\") " pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.549941 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-tj2nn"] Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.632838 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm2qx\" (UniqueName: \"kubernetes.io/projected/ebf9c948-3fde-47f0-aa35-856193c1a275-kube-api-access-hm2qx\") pod \"swift-operator-index-tj2nn\" (UID: \"ebf9c948-3fde-47f0-aa35-856193c1a275\") " pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.651814 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm2qx\" (UniqueName: \"kubernetes.io/projected/ebf9c948-3fde-47f0-aa35-856193c1a275-kube-api-access-hm2qx\") pod \"swift-operator-index-tj2nn\" (UID: \"ebf9c948-3fde-47f0-aa35-856193c1a275\") " pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.708305 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.714346 4835 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vpzbj" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="registry-server" containerID="cri-o://53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7" gracePeriod=2 Feb 01 07:37:29 crc kubenswrapper[4835]: I0201 07:37:29.853903 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.461908 4835 generic.go:334] "Generic (PLEG): container finished" podID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerID="53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7" exitCode=0 Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.461956 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerDied","Data":"53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7"} Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.899846 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-sync-ll8z7"] Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.901805 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.904877 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-jfvt4" Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.907087 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Feb 01 07:37:30 crc kubenswrapper[4835]: I0201 07:37:30.914058 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-ll8z7"] Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.064481 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.064547 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.165593 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.165771 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " 
pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.173674 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.181860 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") pod \"barbican-db-sync-ll8z7\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.221610 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.757975 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.875347 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") pod \"619d1e1e-0c68-4844-86de-2e62153f4f43\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.875763 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") pod \"619d1e1e-0c68-4844-86de-2e62153f4f43\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.875804 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") pod \"619d1e1e-0c68-4844-86de-2e62153f4f43\" (UID: \"619d1e1e-0c68-4844-86de-2e62153f4f43\") " Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.876010 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities" (OuterVolumeSpecName: "utilities") pod "619d1e1e-0c68-4844-86de-2e62153f4f43" (UID: "619d1e1e-0c68-4844-86de-2e62153f4f43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.876199 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.881141 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr" (OuterVolumeSpecName: "kube-api-access-srpqr") pod "619d1e1e-0c68-4844-86de-2e62153f4f43" (UID: "619d1e1e-0c68-4844-86de-2e62153f4f43"). InnerVolumeSpecName "kube-api-access-srpqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.900983 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "619d1e1e-0c68-4844-86de-2e62153f4f43" (UID: "619d1e1e-0c68-4844-86de-2e62153f4f43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.978180 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/619d1e1e-0c68-4844-86de-2e62153f4f43-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:31 crc kubenswrapper[4835]: I0201 07:37:31.978210 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srpqr\" (UniqueName: \"kubernetes.io/projected/619d1e1e-0c68-4844-86de-2e62153f4f43-kube-api-access-srpqr\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.010206 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-ll8z7"] Feb 01 07:37:32 crc kubenswrapper[4835]: W0201 07:37:32.016786 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb13e8606_6ec5_4e1b_a3fd_30f8eac5809a.slice/crio-9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6 WatchSource:0}: Error finding container 9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6: Status 404 returned error can't find the container with id 9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6 Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.291230 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-tj2nn"] Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.477259 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" event={"ID":"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a","Type":"ContainerStarted","Data":"9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6"} Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.479710 4835 generic.go:334] "Generic (PLEG): container finished" podID="7f1e8788-786f-4f9d-b492-3a036764b28d" containerID="f2587070d1a982cc5d125873eda276b59552d3d086a9a0ac397df794ea67afbb" exitCode=0 Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.479828 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w65gv" event={"ID":"7f1e8788-786f-4f9d-b492-3a036764b28d","Type":"ContainerDied","Data":"f2587070d1a982cc5d125873eda276b59552d3d086a9a0ac397df794ea67afbb"} Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.484085 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpzbj" event={"ID":"619d1e1e-0c68-4844-86de-2e62153f4f43","Type":"ContainerDied","Data":"1265b6bb64ba3f48c50d80f4e805869952e6bb5b9b995bee99530e9f77977489"} Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.484172 4835 scope.go:117] "RemoveContainer" containerID="53b1ebf6d0ef8776fb635aec1bcb95829748e55cb2ae9101cecf48766cb03ce7" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.484237 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpzbj" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.493667 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tj2nn" event={"ID":"ebf9c948-3fde-47f0-aa35-856193c1a275","Type":"ContainerStarted","Data":"e03bb36533b8d4d27c58c7e06075e235155cbf7c1c853020cf017b528786cd03"} Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.506039 4835 scope.go:117] "RemoveContainer" containerID="9f2a1ab62add49cdff4bda274340a783a0c8dd6e31e40a7b2352055515842834" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.539725 4835 scope.go:117] "RemoveContainer" containerID="e7201f59379d2bead3c65bd0afefdb43d2476d1d60b92329b0df28725c4698f2" Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.544782 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:32 crc kubenswrapper[4835]: I0201 07:37:32.548725 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpzbj"] Feb 01 07:37:33 crc kubenswrapper[4835]: I0201 07:37:33.583392 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" path="/var/lib/kubelet/pods/619d1e1e-0c68-4844-86de-2e62153f4f43/volumes" Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.547032 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w65gv" event={"ID":"7f1e8788-786f-4f9d-b492-3a036764b28d","Type":"ContainerStarted","Data":"b778322d63fc3addc10376802f0efc0ab9a182e92c0872cc9682ddb7c5728a45"} Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.549246 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-tj2nn" event={"ID":"ebf9c948-3fde-47f0-aa35-856193c1a275","Type":"ContainerStarted","Data":"e4867ce2d606303d7b7174df4fede4e9d40b112eacbbd0776384f2c027a9d972"} Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.551437 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" event={"ID":"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a","Type":"ContainerStarted","Data":"2b8ab5a3d71979bd71932b8afef7987524df6361e18ab704eace9a5d232c62ee"} Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.578972 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w65gv" podStartSLOduration=7.027833842 podStartE2EDuration="13.578952676s" podCreationTimestamp="2026-02-01 07:37:25 +0000 UTC" firstStartedPulling="2026-02-01 07:37:27.424285689 +0000 UTC m=+920.544722153" lastFinishedPulling="2026-02-01 07:37:33.975404553 +0000 UTC m=+927.095840987" observedRunningTime="2026-02-01 07:37:38.574942142 +0000 UTC m=+931.695378586" watchObservedRunningTime="2026-02-01 07:37:38.578952676 +0000 UTC m=+931.699389110" Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.596881 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" podStartSLOduration=2.978741166 podStartE2EDuration="8.596853621s" podCreationTimestamp="2026-02-01 07:37:30 +0000 UTC" firstStartedPulling="2026-02-01 07:37:32.019658999 +0000 UTC m=+925.140095433" lastFinishedPulling="2026-02-01 07:37:37.637771454 +0000 UTC m=+930.758207888" observedRunningTime="2026-02-01 07:37:38.590685091 +0000 UTC m=+931.711121535" watchObservedRunningTime="2026-02-01 07:37:38.596853621 +0000 UTC 
m=+931.717290095" Feb 01 07:37:38 crc kubenswrapper[4835]: I0201 07:37:38.617614 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-tj2nn" podStartSLOduration=4.270106433 podStartE2EDuration="9.617586451s" podCreationTimestamp="2026-02-01 07:37:29 +0000 UTC" firstStartedPulling="2026-02-01 07:37:32.307592196 +0000 UTC m=+925.428028630" lastFinishedPulling="2026-02-01 07:37:37.655072214 +0000 UTC m=+930.775508648" observedRunningTime="2026-02-01 07:37:38.611635026 +0000 UTC m=+931.732071480" watchObservedRunningTime="2026-02-01 07:37:38.617586451 +0000 UTC m=+931.738022905" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.117777 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:39 crc kubenswrapper[4835]: E0201 07:37:39.118648 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="extract-utilities" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.118842 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="extract-utilities" Feb 01 07:37:39 crc kubenswrapper[4835]: E0201 07:37:39.119038 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="registry-server" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.119186 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="registry-server" Feb 01 07:37:39 crc kubenswrapper[4835]: E0201 07:37:39.119361 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="extract-content" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.119520 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="extract-content" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.119973 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="619d1e1e-0c68-4844-86de-2e62153f4f43" containerName="registry-server" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.122545 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.125939 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.286117 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.286223 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.286278 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.387753 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.387902 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.387940 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.388589 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.388734 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.409534 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") pod \"certified-operators-fsrwb\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.443849 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.854945 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.855325 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.888386 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:39 crc kubenswrapper[4835]: I0201 07:37:39.987093 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:40 crc kubenswrapper[4835]: I0201 07:37:40.583794 4835 generic.go:334] "Generic (PLEG): container finished" podID="607e5b0f-62c9-4e68-9491-bd902f239991" containerID="d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb" exitCode=0 Feb 01 07:37:40 crc kubenswrapper[4835]: I0201 07:37:40.583895 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerDied","Data":"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb"} Feb 01 07:37:40 crc kubenswrapper[4835]: I0201 07:37:40.584098 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerStarted","Data":"9acfc6d0be7db460ac004d096dd484152438d2aac4cef33e843db2f818b0265e"} Feb 01 07:37:41 crc kubenswrapper[4835]: I0201 07:37:41.594980 4835 generic.go:334] "Generic (PLEG): container finished" podID="b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" containerID="2b8ab5a3d71979bd71932b8afef7987524df6361e18ab704eace9a5d232c62ee" exitCode=0 Feb 01 07:37:41 crc kubenswrapper[4835]: I0201 07:37:41.595123 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" event={"ID":"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a","Type":"ContainerDied","Data":"2b8ab5a3d71979bd71932b8afef7987524df6361e18ab704eace9a5d232c62ee"} Feb 01 07:37:41 crc kubenswrapper[4835]: I0201 07:37:41.597498 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerStarted","Data":"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85"} Feb 01 07:37:42 crc kubenswrapper[4835]: I0201 07:37:42.609506 4835 generic.go:334] "Generic (PLEG): container finished" podID="607e5b0f-62c9-4e68-9491-bd902f239991" containerID="d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85" exitCode=0 Feb 01 07:37:42 crc kubenswrapper[4835]: I0201 07:37:42.610652 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" 
event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerDied","Data":"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85"} Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.018850 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.183051 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") pod \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.183296 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") pod \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\" (UID: \"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a\") " Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.189955 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" (UID: "b13e8606-6ec5-4e1b-a3fd-30f8eac5809a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.193603 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb" (OuterVolumeSpecName: "kube-api-access-2fffb") pod "b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" (UID: "b13e8606-6ec5-4e1b-a3fd-30f8eac5809a"). InnerVolumeSpecName "kube-api-access-2fffb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.285289 4835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.285329 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fffb\" (UniqueName: \"kubernetes.io/projected/b13e8606-6ec5-4e1b-a3fd-30f8eac5809a-kube-api-access-2fffb\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.619397 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.619435 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-ll8z7" event={"ID":"b13e8606-6ec5-4e1b-a3fd-30f8eac5809a","Type":"ContainerDied","Data":"9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6"} Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.619594 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9519543add700251272f3dd89a59c596cde81a6a29a8642f40d681e2fccdc8e6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.623704 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerStarted","Data":"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b"} Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.651508 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsrwb" podStartSLOduration=2.187116436 podStartE2EDuration="4.651491845s" podCreationTimestamp="2026-02-01 07:37:39 +0000 UTC" firstStartedPulling="2026-02-01 07:37:40.585162383 +0000 UTC m=+933.705598817" lastFinishedPulling="2026-02-01 07:37:43.049537762 +0000 UTC m=+936.169974226" observedRunningTime="2026-02-01 07:37:43.6474639 +0000 UTC m=+936.767900344" watchObservedRunningTime="2026-02-01 07:37:43.651491845 +0000 UTC m=+936.771928279" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.869180 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-worker-794b798997-b6znz"] Feb 01 07:37:43 crc kubenswrapper[4835]: E0201 07:37:43.869512 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" containerName="barbican-db-sync" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.869524 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" containerName="barbican-db-sync" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.869662 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13e8606-6ec5-4e1b-a3fd-30f8eac5809a" containerName="barbican-db-sync" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.870379 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.872192 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.872378 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-worker-config-data" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.872841 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-jfvt4" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.880877 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.882039 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.886575 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.886746 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-keystone-listener-config-data" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.911945 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-794b798997-b6znz"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.966982 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-api-6966d58856-gg77m"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.968193 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.982623 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-api-config-data" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.982806 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-6966d58856-gg77m"] Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h274\" (UniqueName: \"kubernetes.io/projected/8653dceb-2d4e-419e-aa35-37bdca49dc2c-kube-api-access-7h274\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995154 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data-custom\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995191 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-logs\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995217 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995245 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9nmb\" (UniqueName: \"kubernetes.io/projected/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-kube-api-access-j9nmb\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:43 crc 
kubenswrapper[4835]: I0201 07:37:43.995272 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8653dceb-2d4e-419e-aa35-37bdca49dc2c-logs\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995311 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data-custom\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:43 crc kubenswrapper[4835]: I0201 07:37:43.995334 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096139 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9nmb\" (UniqueName: \"kubernetes.io/projected/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-kube-api-access-j9nmb\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096381 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82m54\" (UniqueName: \"kubernetes.io/projected/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-kube-api-access-82m54\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096490 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-logs\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096577 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8653dceb-2d4e-419e-aa35-37bdca49dc2c-logs\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096678 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data-custom\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096767 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096848 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.096928 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h274\" (UniqueName: \"kubernetes.io/projected/8653dceb-2d4e-419e-aa35-37bdca49dc2c-kube-api-access-7h274\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097002 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data-custom\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097081 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data-custom\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097158 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-logs\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097234 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097283 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8653dceb-2d4e-419e-aa35-37bdca49dc2c-logs\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.097705 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-logs\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.105265 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.106532 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.108065 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-config-data-custom\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.110077 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8653dceb-2d4e-419e-aa35-37bdca49dc2c-config-data-custom\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.128665 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9nmb\" (UniqueName: \"kubernetes.io/projected/c8bf5a1c-707a-4858-a716-7bc593ef0fc3-kube-api-access-j9nmb\") pod \"barbican-worker-794b798997-b6znz\" (UID: \"c8bf5a1c-707a-4858-a716-7bc593ef0fc3\") " pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.176082 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h274\" (UniqueName: \"kubernetes.io/projected/8653dceb-2d4e-419e-aa35-37bdca49dc2c-kube-api-access-7h274\") pod \"barbican-keystone-listener-77cb446946-46jb6\" (UID: \"8653dceb-2d4e-419e-aa35-37bdca49dc2c\") " pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.198223 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.198542 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data-custom\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.198683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82m54\" (UniqueName: \"kubernetes.io/projected/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-kube-api-access-82m54\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " 
pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.198782 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-logs\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.199227 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-logs\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.201843 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data-custom\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.202189 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-config-data\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.204641 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.215878 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.225919 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82m54\" (UniqueName: \"kubernetes.io/projected/6a69ee37-d1ea-4c2f-880a-1edb52d4352c-kube-api-access-82m54\") pod \"barbican-api-6966d58856-gg77m\" (UID: \"6a69ee37-d1ea-4c2f-880a-1edb52d4352c\") " pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.279680 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.554063 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6"] Feb 01 07:37:44 crc kubenswrapper[4835]: W0201 07:37:44.565361 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8653dceb_2d4e_419e_aa35_37bdca49dc2c.slice/crio-63c3ce822aa5a38e5757556c4ca57aa00c93048fb3884613914d9646af24c806 WatchSource:0}: Error finding container 63c3ce822aa5a38e5757556c4ca57aa00c93048fb3884613914d9646af24c806: Status 404 returned error can't find the container with id 63c3ce822aa5a38e5757556c4ca57aa00c93048fb3884613914d9646af24c806 Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.604780 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-6966d58856-gg77m"] Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.634641 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" event={"ID":"8653dceb-2d4e-419e-aa35-37bdca49dc2c","Type":"ContainerStarted","Data":"63c3ce822aa5a38e5757556c4ca57aa00c93048fb3884613914d9646af24c806"} Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.636624 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" event={"ID":"6a69ee37-d1ea-4c2f-880a-1edb52d4352c","Type":"ContainerStarted","Data":"63484c50bc011e64f76e80d08706847ecf383710bbc20b9bc1954d22e851a72b"} Feb 01 07:37:44 crc kubenswrapper[4835]: I0201 07:37:44.659470 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-794b798997-b6znz"] Feb 01 07:37:44 crc kubenswrapper[4835]: W0201 07:37:44.664105 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8bf5a1c_707a_4858_a716_7bc593ef0fc3.slice/crio-f1ab31cb5b21b1aea1ce644128a9b00fbf971979563ea56f1dbc928a4f19ce1b WatchSource:0}: Error finding container f1ab31cb5b21b1aea1ce644128a9b00fbf971979563ea56f1dbc928a4f19ce1b: Status 404 returned error can't find the container with id f1ab31cb5b21b1aea1ce644128a9b00fbf971979563ea56f1dbc928a4f19ce1b Feb 01 07:37:45 crc kubenswrapper[4835]: I0201 07:37:45.645962 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" event={"ID":"c8bf5a1c-707a-4858-a716-7bc593ef0fc3","Type":"ContainerStarted","Data":"f1ab31cb5b21b1aea1ce644128a9b00fbf971979563ea56f1dbc928a4f19ce1b"} Feb 01 07:37:45 crc kubenswrapper[4835]: I0201 07:37:45.852494 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:45 crc kubenswrapper[4835]: I0201 07:37:45.852577 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:45 crc kubenswrapper[4835]: I0201 07:37:45.901009 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:46 crc kubenswrapper[4835]: I0201 07:37:46.714196 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w65gv" Feb 01 07:37:47 crc kubenswrapper[4835]: I0201 07:37:47.596091 4835 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-w65gv"] Feb 01 07:37:47 crc kubenswrapper[4835]: I0201 07:37:47.665294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" event={"ID":"6a69ee37-d1ea-4c2f-880a-1edb52d4352c","Type":"ContainerStarted","Data":"f5b8ee84687ed8aca28c18c3766e832ef3a4c90568a55d15a8379eeace0bb974"} Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.310658 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5blqv"] Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.311552 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5blqv" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="registry-server" containerID="cri-o://4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2" gracePeriod=2 Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.681451 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" event={"ID":"6a69ee37-d1ea-4c2f-880a-1edb52d4352c","Type":"ContainerStarted","Data":"4491c22e6fe8d03497b625c05fa97472dbd12f6f97b5767941284a9200d468a1"} Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.682583 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.682611 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.690259 4835 generic.go:334] "Generic (PLEG): container finished" podID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerID="4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2" exitCode=0 Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.690370 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerDied","Data":"4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2"} Feb 01 07:37:48 crc kubenswrapper[4835]: I0201 07:37:48.707360 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" podStartSLOduration=5.70734596 podStartE2EDuration="5.70734596s" podCreationTimestamp="2026-02-01 07:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:37:48.704898916 +0000 UTC m=+941.825335350" watchObservedRunningTime="2026-02-01 07:37:48.70734596 +0000 UTC m=+941.827782394" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.018429 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.178819 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") pod \"48972eb7-80de-4d1a-b9c1-adf412bd3531\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.178889 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") pod \"48972eb7-80de-4d1a-b9c1-adf412bd3531\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.179036 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") pod \"48972eb7-80de-4d1a-b9c1-adf412bd3531\" (UID: \"48972eb7-80de-4d1a-b9c1-adf412bd3531\") " Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.179587 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities" (OuterVolumeSpecName: "utilities") pod "48972eb7-80de-4d1a-b9c1-adf412bd3531" (UID: "48972eb7-80de-4d1a-b9c1-adf412bd3531"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.184641 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42" (OuterVolumeSpecName: "kube-api-access-62w42") pod "48972eb7-80de-4d1a-b9c1-adf412bd3531" (UID: "48972eb7-80de-4d1a-b9c1-adf412bd3531"). InnerVolumeSpecName "kube-api-access-62w42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.234398 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48972eb7-80de-4d1a-b9c1-adf412bd3531" (UID: "48972eb7-80de-4d1a-b9c1-adf412bd3531"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.280299 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.280327 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62w42\" (UniqueName: \"kubernetes.io/projected/48972eb7-80de-4d1a-b9c1-adf412bd3531-kube-api-access-62w42\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.280338 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48972eb7-80de-4d1a-b9c1-adf412bd3531-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.444292 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.444334 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.494187 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.701036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" event={"ID":"8653dceb-2d4e-419e-aa35-37bdca49dc2c","Type":"ContainerStarted","Data":"e8b8e34d7de9a1640fffa41d3521471455d5d34db5f3b102a283b235dd926882"} Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.702600 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" event={"ID":"c8bf5a1c-707a-4858-a716-7bc593ef0fc3","Type":"ContainerStarted","Data":"789ce88b44a0f90da4252454c52e4c6c9443355b68015994137c57083a98f4e5"} Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.705071 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5blqv" event={"ID":"48972eb7-80de-4d1a-b9c1-adf412bd3531","Type":"ContainerDied","Data":"a97613cab5446cbb6022f66ef99ec2081a9134140b311ba32389e80a2e221cbc"} Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.705135 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5blqv" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.705175 4835 scope.go:117] "RemoveContainer" containerID="4b4accff2f1a20d0e288fd1c22d16a0996201d0dc3273c256de8cfeb83f7a5c2" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.730468 4835 scope.go:117] "RemoveContainer" containerID="00639fbfdc8c05a878182afacfc54aac4d6d97d80b8d202f1d59fcc0b702129d" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.735985 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5blqv"] Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.742243 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5blqv"] Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.769068 4835 scope.go:117] "RemoveContainer" containerID="d5e2f5d1534650a4cf1433bf132faf98e02e52decf048ace44fbb7b0f61e32fe" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.797521 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:49 crc kubenswrapper[4835]: I0201 07:37:49.886870 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-tj2nn" Feb 01 07:37:50 crc kubenswrapper[4835]: I0201 07:37:50.713710 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" event={"ID":"8653dceb-2d4e-419e-aa35-37bdca49dc2c","Type":"ContainerStarted","Data":"0d322a883867b3891f61684cb07c6a5d3acb96769e7caab946e9d0a9c59890a4"} Feb 01 07:37:50 crc kubenswrapper[4835]: I0201 07:37:50.715669 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" event={"ID":"c8bf5a1c-707a-4858-a716-7bc593ef0fc3","Type":"ContainerStarted","Data":"84d14344b79c1cc2799abe8f41203ecef8883d25ad9a2e7cc85c543fa2ae24d2"} Feb 01 07:37:50 crc kubenswrapper[4835]: I0201 07:37:50.744649 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-keystone-listener-77cb446946-46jb6" podStartSLOduration=2.873613706 podStartE2EDuration="7.744624423s" podCreationTimestamp="2026-02-01 07:37:43 +0000 UTC" firstStartedPulling="2026-02-01 07:37:44.567251687 +0000 UTC m=+937.687688121" lastFinishedPulling="2026-02-01 07:37:49.438262404 +0000 UTC m=+942.558698838" observedRunningTime="2026-02-01 07:37:50.735029694 +0000 UTC m=+943.855466178" watchObservedRunningTime="2026-02-01 07:37:50.744624423 +0000 UTC m=+943.865060867" Feb 01 07:37:50 crc kubenswrapper[4835]: I0201 07:37:50.757762 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-worker-794b798997-b6znz" podStartSLOduration=2.983854232 podStartE2EDuration="7.757740044s" podCreationTimestamp="2026-02-01 07:37:43 +0000 UTC" firstStartedPulling="2026-02-01 07:37:44.666373684 +0000 UTC m=+937.786810128" lastFinishedPulling="2026-02-01 07:37:49.440259506 +0000 UTC m=+942.560695940" observedRunningTime="2026-02-01 07:37:50.75409817 +0000 UTC m=+943.874534654" watchObservedRunningTime="2026-02-01 07:37:50.757740044 +0000 UTC m=+943.878176498" Feb 01 07:37:51 crc kubenswrapper[4835]: I0201 07:37:51.576555 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" path="/var/lib/kubelet/pods/48972eb7-80de-4d1a-b9c1-adf412bd3531/volumes" Feb 01 07:37:53 
crc kubenswrapper[4835]: I0201 07:37:53.108218 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.109825 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsrwb" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="registry-server" containerID="cri-o://17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b" gracePeriod=2 Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.582220 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.657800 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") pod \"607e5b0f-62c9-4e68-9491-bd902f239991\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.658095 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") pod \"607e5b0f-62c9-4e68-9491-bd902f239991\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.658180 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") pod \"607e5b0f-62c9-4e68-9491-bd902f239991\" (UID: \"607e5b0f-62c9-4e68-9491-bd902f239991\") " Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.662210 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities" (OuterVolumeSpecName: "utilities") pod "607e5b0f-62c9-4e68-9491-bd902f239991" (UID: "607e5b0f-62c9-4e68-9491-bd902f239991"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.680601 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp" (OuterVolumeSpecName: "kube-api-access-6nqrp") pod "607e5b0f-62c9-4e68-9491-bd902f239991" (UID: "607e5b0f-62c9-4e68-9491-bd902f239991"). InnerVolumeSpecName "kube-api-access-6nqrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.746886 4835 generic.go:334] "Generic (PLEG): container finished" podID="607e5b0f-62c9-4e68-9491-bd902f239991" containerID="17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b" exitCode=0 Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.746928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerDied","Data":"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b"} Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.746973 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsrwb" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.747003 4835 scope.go:117] "RemoveContainer" containerID="17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.746990 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsrwb" event={"ID":"607e5b0f-62c9-4e68-9491-bd902f239991","Type":"ContainerDied","Data":"9acfc6d0be7db460ac004d096dd484152438d2aac4cef33e843db2f818b0265e"} Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.759558 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nqrp\" (UniqueName: \"kubernetes.io/projected/607e5b0f-62c9-4e68-9491-bd902f239991-kube-api-access-6nqrp\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.759590 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.765225 4835 scope.go:117] "RemoveContainer" containerID="d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.783462 4835 scope.go:117] "RemoveContainer" containerID="d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.814144 4835 scope.go:117] "RemoveContainer" containerID="17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b" Feb 01 07:37:53 crc kubenswrapper[4835]: E0201 07:37:53.819539 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b\": container with ID starting with 17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b not found: ID does not exist" containerID="17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.819582 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b"} err="failed to get container status \"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b\": rpc error: code = NotFound desc = could not find container \"17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b\": container with ID starting with 17d25ce3f624097e0960bb33314c37a8b60b68d19e13c45220790f36847d079b not found: ID does not exist" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.819606 4835 scope.go:117] "RemoveContainer" containerID="d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85" Feb 01 07:37:53 crc kubenswrapper[4835]: E0201 07:37:53.820691 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85\": container with ID starting with d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85 not found: ID does not exist" containerID="d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.820733 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85"} err="failed to get container status \"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85\": rpc error: code = NotFound desc = could not find container \"d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85\": container with ID starting with d4f498772483fe868206deabdf3b9ab745fbb25a15e788dec51d4cad0dbd7e85 not found: ID does not exist" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.820763 4835 scope.go:117] "RemoveContainer" containerID="d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb" Feb 01 07:37:53 crc kubenswrapper[4835]: E0201 07:37:53.821195 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb\": container with ID starting with d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb not found: ID does not exist" containerID="d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb" Feb 01 07:37:53 crc kubenswrapper[4835]: I0201 07:37:53.821233 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb"} err="failed to get container status \"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb\": rpc error: code = NotFound desc = could not find container \"d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb\": container with ID starting with d6f37391c6bf1c76eabb143b2a8fdef766b092780b4fab3f8e9dde55e5d749bb not found: ID does not exist" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.079544 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "607e5b0f-62c9-4e68-9491-bd902f239991" (UID: "607e5b0f-62c9-4e68-9491-bd902f239991"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.165397 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/607e5b0f-62c9-4e68-9491-bd902f239991-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.380461 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.388634 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsrwb"] Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.765786 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5"] Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766350 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="extract-utilities" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766365 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="extract-utilities" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766386 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766392 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766403 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766425 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766435 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="extract-content" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766441 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="extract-content" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766450 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="extract-utilities" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766455 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="extract-utilities" Feb 01 07:37:54 crc kubenswrapper[4835]: E0201 07:37:54.766467 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="extract-content" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766472 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="extract-content" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766592 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.766606 4835 
memory_manager.go:354] "RemoveStaleState removing state" podUID="48972eb7-80de-4d1a-b9c1-adf412bd3531" containerName="registry-server" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.767487 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.770467 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-j4xxm" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.781844 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5"] Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.874616 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.874952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.875099 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.976598 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.977321 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.977940 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " 
pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.978501 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:54 crc kubenswrapper[4835]: I0201 07:37:54.978623 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.002466 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") pod \"ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.088794 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.195251 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.195319 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:37:55 crc kubenswrapper[4835]: W0201 07:37:55.382274 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod846fe1f2_f96b_4447_9336_d58ac094d486.slice/crio-6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43 WatchSource:0}: Error finding container 6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43: Status 404 returned error can't find the container with id 6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43 Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.393562 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5"] Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.576626 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607e5b0f-62c9-4e68-9491-bd902f239991" path="/var/lib/kubelet/pods/607e5b0f-62c9-4e68-9491-bd902f239991/volumes" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.654927 4835 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.665926 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-6966d58856-gg77m" Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.768661 4835 generic.go:334] "Generic (PLEG): container finished" podID="846fe1f2-f96b-4447-9336-d58ac094d486" containerID="1b077152679f61684216febdeb298224f68b80e1c19f22bc8bc12d2392a4404e" exitCode=0 Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.769600 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerDied","Data":"1b077152679f61684216febdeb298224f68b80e1c19f22bc8bc12d2392a4404e"} Feb 01 07:37:55 crc kubenswrapper[4835]: I0201 07:37:55.769625 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerStarted","Data":"6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43"} Feb 01 07:37:56 crc kubenswrapper[4835]: I0201 07:37:56.778574 4835 generic.go:334] "Generic (PLEG): container finished" podID="846fe1f2-f96b-4447-9336-d58ac094d486" containerID="ca6222a2ba1c866e30bf8acbe47b4077ad304afecf74b62ff428461243b2d713" exitCode=0 Feb 01 07:37:56 crc kubenswrapper[4835]: I0201 07:37:56.778625 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerDied","Data":"ca6222a2ba1c866e30bf8acbe47b4077ad304afecf74b62ff428461243b2d713"} Feb 01 07:37:57 crc kubenswrapper[4835]: I0201 07:37:57.791370 4835 generic.go:334] "Generic (PLEG): container finished" podID="846fe1f2-f96b-4447-9336-d58ac094d486" containerID="6c373212d8d8abaae56569d8aa1acf8724c03fadcbcbe09fe69771c9bf4e7225" exitCode=0 Feb 01 07:37:57 crc kubenswrapper[4835]: I0201 07:37:57.791701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerDied","Data":"6c373212d8d8abaae56569d8aa1acf8724c03fadcbcbe09fe69771c9bf4e7225"} Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.204966 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.269300 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") pod \"846fe1f2-f96b-4447-9336-d58ac094d486\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.269491 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") pod \"846fe1f2-f96b-4447-9336-d58ac094d486\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.269539 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") pod \"846fe1f2-f96b-4447-9336-d58ac094d486\" (UID: \"846fe1f2-f96b-4447-9336-d58ac094d486\") " Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.270550 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle" (OuterVolumeSpecName: "bundle") pod "846fe1f2-f96b-4447-9336-d58ac094d486" (UID: "846fe1f2-f96b-4447-9336-d58ac094d486"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.276602 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8" (OuterVolumeSpecName: "kube-api-access-sg6d8") pod "846fe1f2-f96b-4447-9336-d58ac094d486" (UID: "846fe1f2-f96b-4447-9336-d58ac094d486"). InnerVolumeSpecName "kube-api-access-sg6d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.302397 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util" (OuterVolumeSpecName: "util") pod "846fe1f2-f96b-4447-9336-d58ac094d486" (UID: "846fe1f2-f96b-4447-9336-d58ac094d486"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.371950 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-bundle\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.372009 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg6d8\" (UniqueName: \"kubernetes.io/projected/846fe1f2-f96b-4447-9336-d58ac094d486-kube-api-access-sg6d8\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.372032 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/846fe1f2-f96b-4447-9336-d58ac094d486-util\") on node \"crc\" DevicePath \"\"" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.822631 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" event={"ID":"846fe1f2-f96b-4447-9336-d58ac094d486","Type":"ContainerDied","Data":"6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43"} Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.822690 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a623df9300c3a987a49e17cbb0ddefbc2c603076d04262ccb90daf41befab43" Feb 01 07:37:59 crc kubenswrapper[4835]: I0201 07:37:59.822751 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.951099 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r"] Feb 01 07:38:09 crc kubenswrapper[4835]: E0201 07:38:09.952241 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="pull" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.952261 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="pull" Feb 01 07:38:09 crc kubenswrapper[4835]: E0201 07:38:09.952286 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="util" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.952298 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="util" Feb 01 07:38:09 crc kubenswrapper[4835]: E0201 07:38:09.952316 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="extract" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.952329 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="extract" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.952566 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="846fe1f2-f96b-4447-9336-d58ac094d486" containerName="extract" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.953208 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.955618 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-wsg8l" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.956141 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Feb 01 07:38:09 crc kubenswrapper[4835]: I0201 07:38:09.971124 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r"] Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.043884 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-apiservice-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.043962 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5t64\" (UniqueName: \"kubernetes.io/projected/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-kube-api-access-j5t64\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.043989 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-webhook-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.145331 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5t64\" (UniqueName: \"kubernetes.io/projected/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-kube-api-access-j5t64\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.145389 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-webhook-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.145495 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-apiservice-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.154138 4835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-webhook-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.154559 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-apiservice-cert\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.164170 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5t64\" (UniqueName: \"kubernetes.io/projected/26de1ab5-eb0d-4fe4-83ad-25f2262bd958-kube-api-access-j5t64\") pod \"swift-operator-controller-manager-7b5bf4689c-j4d4r\" (UID: \"26de1ab5-eb0d-4fe4-83ad-25f2262bd958\") " pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.275032 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.542510 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r"] Feb 01 07:38:10 crc kubenswrapper[4835]: I0201 07:38:10.922684 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" event={"ID":"26de1ab5-eb0d-4fe4-83ad-25f2262bd958","Type":"ContainerStarted","Data":"13a8d88350d4c01f9f1a85d724fb28342b65da1d8600ee4a5441f680d10bc42f"} Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.921976 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.924673 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.935846 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.968991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" event={"ID":"26de1ab5-eb0d-4fe4-83ad-25f2262bd958","Type":"ContainerStarted","Data":"b82f3d8afa05a0091c353c49b5d86bc1d0e51d1ce5a5ce9b648ab9e32d83eb1b"} Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.970346 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.983924 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.984065 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:12 crc kubenswrapper[4835]: I0201 07:38:12.984143 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.005490 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" podStartSLOduration=2.397000895 podStartE2EDuration="4.005475669s" podCreationTimestamp="2026-02-01 07:38:09 +0000 UTC" firstStartedPulling="2026-02-01 07:38:10.549104767 +0000 UTC m=+963.669541211" lastFinishedPulling="2026-02-01 07:38:12.157579551 +0000 UTC m=+965.278015985" observedRunningTime="2026-02-01 07:38:12.995351315 +0000 UTC m=+966.115787799" watchObservedRunningTime="2026-02-01 07:38:13.005475669 +0000 UTC m=+966.125912103" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.085369 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.085520 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.085569 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.086055 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.086061 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.106258 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") pod \"redhat-operators-xlq66\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.296895 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.765980 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.975364 4835 generic.go:334] "Generic (PLEG): container finished" podID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerID="074b7b427f3abecc72a80949ae3752e1f1c013ce713d02c038f8c2a763ae2cc2" exitCode=0 Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.975409 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerDied","Data":"074b7b427f3abecc72a80949ae3752e1f1c013ce713d02c038f8c2a763ae2cc2"} Feb 01 07:38:13 crc kubenswrapper[4835]: I0201 07:38:13.975737 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerStarted","Data":"1f8b2e37088d7c85f0e3747b5b4f057bfc4c3678cedcd231a9d989615deb01d9"} Feb 01 07:38:14 crc kubenswrapper[4835]: I0201 07:38:14.984622 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerStarted","Data":"efd6d27aa99471d986be400fe860f654afc230c59f8ab263518d756dc64c1864"} Feb 01 07:38:15 crc kubenswrapper[4835]: I0201 07:38:15.992557 4835 generic.go:334] "Generic (PLEG): container finished" podID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerID="efd6d27aa99471d986be400fe860f654afc230c59f8ab263518d756dc64c1864" exitCode=0 Feb 01 07:38:15 crc kubenswrapper[4835]: I0201 07:38:15.992610 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" 
event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerDied","Data":"efd6d27aa99471d986be400fe860f654afc230c59f8ab263518d756dc64c1864"} Feb 01 07:38:18 crc kubenswrapper[4835]: I0201 07:38:18.011466 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerStarted","Data":"1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a"} Feb 01 07:38:18 crc kubenswrapper[4835]: I0201 07:38:18.053903 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xlq66" podStartSLOduration=2.563360806 podStartE2EDuration="6.053879628s" podCreationTimestamp="2026-02-01 07:38:12 +0000 UTC" firstStartedPulling="2026-02-01 07:38:13.97813113 +0000 UTC m=+967.098567564" lastFinishedPulling="2026-02-01 07:38:17.468649912 +0000 UTC m=+970.589086386" observedRunningTime="2026-02-01 07:38:18.053618582 +0000 UTC m=+971.174055056" watchObservedRunningTime="2026-02-01 07:38:18.053879628 +0000 UTC m=+971.174316092" Feb 01 07:38:20 crc kubenswrapper[4835]: I0201 07:38:20.281351 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-7b5bf4689c-j4d4r" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.598251 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.603189 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.605053 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.605061 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.606198 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-hwgzn" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.606381 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.625955 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727716 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727783 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") pod 
\"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727837 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.727953 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.828901 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.828968 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.829000 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: E0201 07:38:22.829037 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Feb 01 07:38:22 crc kubenswrapper[4835]: E0201 07:38:22.829056 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Feb 01 07:38:22 crc kubenswrapper[4835]: E0201 07:38:22.829102 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift podName:1edd7394-0f8e-4271-8774-f228946e62f3 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:23.329086461 +0000 UTC m=+976.449522895 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift") pod "swift-storage-0" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3") : configmap "swift-ring-files" not found Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.829053 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.829212 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.834745 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.836857 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.840616 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") device mount path \"/mnt/openstack/pv10\"" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.873881 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:22 crc kubenswrapper[4835]: I0201 07:38:22.877613 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.297120 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.297456 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.334931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.335119 4835 
projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.335134 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.335200 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift podName:1edd7394-0f8e-4271-8774-f228946e62f3 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:24.335169094 +0000 UTC m=+977.455605528 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift") pod "swift-storage-0" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3") : configmap "swift-ring-files" not found Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.598681 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"] Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.599957 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.603606 4835 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.610909 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"] Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.739650 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb8908-ffc6-4032-8907-da7491bf9304-config-data\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.739718 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sl55\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-kube-api-access-7sl55\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.740192 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-run-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.740217 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-log-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.740242 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841137 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb8908-ffc6-4032-8907-da7491bf9304-config-data\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841199 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sl55\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-kube-api-access-7sl55\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841276 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-run-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-log-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.841312 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.841589 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.841607 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r: configmap "swift-ring-files" not found Feb 01 07:38:23 crc kubenswrapper[4835]: E0201 07:38:23.841658 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift podName:8ccb8908-ffc6-4032-8907-da7491bf9304 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:24.341639387 +0000 UTC m=+977.462075821 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift") pod "swift-proxy-7d8cf99555-6vq9r" (UID: "8ccb8908-ffc6-4032-8907-da7491bf9304") : configmap "swift-ring-files" not found Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.843046 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-run-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.843140 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccb8908-ffc6-4032-8907-da7491bf9304-log-httpd\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.847065 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccb8908-ffc6-4032-8907-da7491bf9304-config-data\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:23 crc kubenswrapper[4835]: I0201 07:38:23.858795 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sl55\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-kube-api-access-7sl55\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:24 crc kubenswrapper[4835]: I0201 07:38:24.333142 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xlq66" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server" probeResult="failure" output=< Feb 01 07:38:24 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 01 07:38:24 crc kubenswrapper[4835]: > Feb 01 07:38:24 crc kubenswrapper[4835]: I0201 07:38:24.347788 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:24 crc kubenswrapper[4835]: I0201 07:38:24.347863 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348001 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348034 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r: configmap "swift-ring-files" not found Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348032 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap 
"swift-ring-files" not found Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348066 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348092 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift podName:8ccb8908-ffc6-4032-8907-da7491bf9304 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:25.348070018 +0000 UTC m=+978.468506452 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift") pod "swift-proxy-7d8cf99555-6vq9r" (UID: "8ccb8908-ffc6-4032-8907-da7491bf9304") : configmap "swift-ring-files" not found Feb 01 07:38:24 crc kubenswrapper[4835]: E0201 07:38:24.348128 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift podName:1edd7394-0f8e-4271-8774-f228946e62f3 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:26.348106829 +0000 UTC m=+979.468543323 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift") pod "swift-storage-0" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3") : configmap "swift-ring-files" not found Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.191686 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.191751 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.191805 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.192482 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.192545 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05" gracePeriod=600 Feb 01 07:38:25 crc kubenswrapper[4835]: I0201 07:38:25.363535 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod 
\"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:25 crc kubenswrapper[4835]: E0201 07:38:25.363769 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Feb 01 07:38:25 crc kubenswrapper[4835]: E0201 07:38:25.363787 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r: configmap "swift-ring-files" not found Feb 01 07:38:25 crc kubenswrapper[4835]: E0201 07:38:25.363852 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift podName:8ccb8908-ffc6-4032-8907-da7491bf9304 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:27.363834088 +0000 UTC m=+980.484270522 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift") pod "swift-proxy-7d8cf99555-6vq9r" (UID: "8ccb8908-ffc6-4032-8907-da7491bf9304") : configmap "swift-ring-files" not found Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.066785 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05" exitCode=0 Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.066867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05"} Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.067207 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3"} Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.067242 4835 scope.go:117] "RemoveContainer" containerID="6da4a09917e14a43c6af10d69dcc7ba3d2cd41146e8c294ea85744f0374d0efa" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.381148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.381371 4835 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.381620 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.381714 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift podName:1edd7394-0f8e-4271-8774-f228946e62f3 nodeName:}" failed. No retries permitted until 2026-02-01 07:38:30.381685253 +0000 UTC m=+983.502121727 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift") pod "swift-storage-0" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3") : configmap "swift-ring-files" not found Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.696693 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-w2wt7"] Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.698492 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.701378 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.701787 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.729709 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-w2wt7"] Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.889671 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ffk\" (UniqueName: \"kubernetes.io/projected/b45c05e1-195b-43c0-a44d-1d1c50886dfc-kube-api-access-k9ffk\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.889753 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.889917 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b45c05e1-195b-43c0-a44d-1d1c50886dfc-etc-swift\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.890066 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-dispersionconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.890159 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-swiftconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.890273 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-scripts\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.991942 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-scripts\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992054 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ffk\" (UniqueName: \"kubernetes.io/projected/b45c05e1-195b-43c0-a44d-1d1c50886dfc-kube-api-access-k9ffk\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992112 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992189 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b45c05e1-195b-43c0-a44d-1d1c50886dfc-etc-swift\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.992230 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992271 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-dispersionconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: E0201 07:38:26.992293 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:27.492277292 +0000 UTC m=+980.612713726 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.992358 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-swiftconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.993029 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-scripts\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:26 crc kubenswrapper[4835]: I0201 07:38:26.993573 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b45c05e1-195b-43c0-a44d-1d1c50886dfc-etc-swift\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.002881 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-swiftconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.018367 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b45c05e1-195b-43c0-a44d-1d1c50886dfc-dispersionconf\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.018775 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ffk\" (UniqueName: \"kubernetes.io/projected/b45c05e1-195b-43c0-a44d-1d1c50886dfc-kube-api-access-k9ffk\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.399746 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.405973 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccb8908-ffc6-4032-8907-da7491bf9304-etc-swift\") pod \"swift-proxy-7d8cf99555-6vq9r\" (UID: \"8ccb8908-ffc6-4032-8907-da7491bf9304\") " pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.501606 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:27 crc kubenswrapper[4835]: E0201 07:38:27.501815 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:27 crc kubenswrapper[4835]: E0201 07:38:27.501923 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:28.501896108 +0000 UTC m=+981.622332582 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:27 crc kubenswrapper[4835]: I0201 07:38:27.534841 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:28 crc kubenswrapper[4835]: I0201 07:38:28.518712 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:28 crc kubenswrapper[4835]: E0201 07:38:28.518920 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:28 crc kubenswrapper[4835]: E0201 07:38:28.519395 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:30.519370683 +0000 UTC m=+983.639807207 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:28 crc kubenswrapper[4835]: I0201 07:38:28.599220 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"] Feb 01 07:38:28 crc kubenswrapper[4835]: W0201 07:38:28.610538 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ccb8908_ffc6_4032_8907_da7491bf9304.slice/crio-8908cf22a853343b93df37395aa541eabbcfc98751ced4a9119ab669313c07d7 WatchSource:0}: Error finding container 8908cf22a853343b93df37395aa541eabbcfc98751ced4a9119ab669313c07d7: Status 404 returned error can't find the container with id 8908cf22a853343b93df37395aa541eabbcfc98751ced4a9119ab669313c07d7 Feb 01 07:38:29 crc kubenswrapper[4835]: I0201 07:38:29.093403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"8908cf22a853343b93df37395aa541eabbcfc98751ced4a9119ab669313c07d7"} Feb 01 07:38:30 crc kubenswrapper[4835]: I0201 07:38:30.457226 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:30 crc kubenswrapper[4835]: I0201 07:38:30.464179 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"swift-storage-0\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:30 crc kubenswrapper[4835]: I0201 07:38:30.559202 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:30 crc kubenswrapper[4835]: E0201 07:38:30.559393 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:30 crc kubenswrapper[4835]: E0201 07:38:30.559511 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:34.559481087 +0000 UTC m=+987.679917541 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:30 crc kubenswrapper[4835]: I0201 07:38:30.719165 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:38:32 crc kubenswrapper[4835]: I0201 07:38:32.115491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7"} Feb 01 07:38:32 crc kubenswrapper[4835]: I0201 07:38:32.235750 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:38:32 crc kubenswrapper[4835]: W0201 07:38:32.242585 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edd7394_0f8e_4271_8774_f228946e62f3.slice/crio-965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803 WatchSource:0}: Error finding container 965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803: Status 404 returned error can't find the container with id 965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803 Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.123554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803"} Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.126419 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="25cca2f3f5f0ca4235e68b5a9b94250ec3bd171877b74e1618d32e349210087f" exitCode=1 Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.126457 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"25cca2f3f5f0ca4235e68b5a9b94250ec3bd171877b74e1618d32e349210087f"} Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.126583 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.127181 4835 scope.go:117] "RemoveContainer" containerID="25cca2f3f5f0ca4235e68b5a9b94250ec3bd171877b74e1618d32e349210087f" Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.416761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.467474 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:33 crc kubenswrapper[4835]: I0201 07:38:33.535522 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.136084 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.136454 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138299 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="0aa899c6c8fea3baf53158221f585cfd84d23b944687209ecc3c91475a6c13e1" exitCode=1 Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138401 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138480 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138501 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"0aa899c6c8fea3baf53158221f585cfd84d23b944687209ecc3c91475a6c13e1"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.138517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541"} Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.160824 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podStartSLOduration=7.972956376 podStartE2EDuration="11.16080586s" podCreationTimestamp="2026-02-01 07:38:23 +0000 UTC" firstStartedPulling="2026-02-01 07:38:28.61513586 +0000 UTC m=+981.735572334" lastFinishedPulling="2026-02-01 07:38:31.802985374 +0000 UTC m=+984.923421818" observedRunningTime="2026-02-01 07:38:34.154386331 +0000 UTC m=+987.274822785" watchObservedRunningTime="2026-02-01 07:38:34.16080586 +0000 UTC m=+987.281242304" Feb 01 07:38:34 crc kubenswrapper[4835]: I0201 07:38:34.660952 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:38:34 crc kubenswrapper[4835]: E0201 07:38:34.661224 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:38:34 crc kubenswrapper[4835]: E0201 07:38:34.661574 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:42.661542212 +0000 UTC m=+995.781978676 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:38:35 crc kubenswrapper[4835]: I0201 07:38:35.152781 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399" exitCode=1 Feb 01 07:38:35 crc kubenswrapper[4835]: I0201 07:38:35.152832 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399"} Feb 01 07:38:35 crc kubenswrapper[4835]: I0201 07:38:35.152883 4835 scope.go:117] "RemoveContainer" containerID="25cca2f3f5f0ca4235e68b5a9b94250ec3bd171877b74e1618d32e349210087f" Feb 01 07:38:35 crc kubenswrapper[4835]: I0201 07:38:35.158045 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399" Feb 01 07:38:35 crc kubenswrapper[4835]: E0201 07:38:35.164026 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.167066 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399" Feb 01 07:38:36 crc kubenswrapper[4835]: E0201 07:38:36.167331 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176168 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="99dd4e1721ebaf1dd026d0a1154a6d27d931d29c79ee7f9d577ac388cfe1e0bd" exitCode=1 Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176218 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540"} Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176260 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10"} Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176275 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"99dd4e1721ebaf1dd026d0a1154a6d27d931d29c79ee7f9d577ac388cfe1e0bd"} Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.176291 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a"} Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.535197 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.917926 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:36 crc kubenswrapper[4835]: I0201 07:38:36.918633 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xlq66" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server" containerID="cri-o://1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a" gracePeriod=2 Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.205762 4835 generic.go:334] "Generic (PLEG): container finished" podID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerID="1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a" exitCode=0 Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.206258 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerDied","Data":"1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a"} Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.219572 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4"} Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.219873 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399" Feb 01 07:38:37 crc kubenswrapper[4835]: E0201 07:38:37.220169 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.229590 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.381533 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.419890 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") pod \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.420032 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") pod \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.420065 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") pod \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\" (UID: \"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d\") " Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.421719 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities" (OuterVolumeSpecName: "utilities") pod "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" (UID: "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.428685 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl" (OuterVolumeSpecName: "kube-api-access-gz8wl") pod "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" (UID: "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d"). InnerVolumeSpecName "kube-api-access-gz8wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.521907 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.521950 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz8wl\" (UniqueName: \"kubernetes.io/projected/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-kube-api-access-gz8wl\") on node \"crc\" DevicePath \"\"" Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.538321 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.599181 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" (UID: "2b9e4f72-eb97-434b-aba4-ebf37ef1f51d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:38:37 crc kubenswrapper[4835]: I0201 07:38:37.627324 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.228301 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xlq66" event={"ID":"2b9e4f72-eb97-434b-aba4-ebf37ef1f51d","Type":"ContainerDied","Data":"1f8b2e37088d7c85f0e3747b5b4f057bfc4c3678cedcd231a9d989615deb01d9"} Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.228726 4835 scope.go:117] "RemoveContainer" containerID="1ba5facccd298a7a96c3ca2eb9b5d6ac7ba944b347689b88a082e975e765f27a" Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.228367 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xlq66" Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.237733 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b"} Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.237887 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876"} Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.237979 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4"} Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238065 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585"} Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238141 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14"} Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238199 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c"} Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238678 4835 scope.go:117] "RemoveContainer" containerID="0aa899c6c8fea3baf53158221f585cfd84d23b944687209ecc3c91475a6c13e1" Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.238825 4835 scope.go:117] "RemoveContainer" containerID="99dd4e1721ebaf1dd026d0a1154a6d27d931d29c79ee7f9d577ac388cfe1e0bd" Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.258696 4835 scope.go:117] "RemoveContainer" containerID="efd6d27aa99471d986be400fe860f654afc230c59f8ab263518d756dc64c1864" Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.305758 4835 scope.go:117] "RemoveContainer" 
containerID="074b7b427f3abecc72a80949ae3752e1f1c013ce713d02c038f8c2a763ae2cc2" Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.313063 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:38 crc kubenswrapper[4835]: I0201 07:38:38.319767 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xlq66"] Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.258394 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4" exitCode=1 Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.258939 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" exitCode=1 Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.258965 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" exitCode=1 Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.258634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4"} Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259043 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea"} Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259067 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad"} Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259093 4835 scope.go:117] "RemoveContainer" containerID="99dd4e1721ebaf1dd026d0a1154a6d27d931d29c79ee7f9d577ac388cfe1e0bd" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259325 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259495 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.259711 4835 scope.go:117] "RemoveContainer" containerID="f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.334148 4835 scope.go:117] "RemoveContainer" containerID="0aa899c6c8fea3baf53158221f585cfd84d23b944687209ecc3c91475a6c13e1" Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.537360 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:38:39 crc kubenswrapper[4835]: E0201 07:38:39.549049 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting 
Feb 01 07:38:39 crc kubenswrapper[4835]: I0201 07:38:39.578968 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" path="/var/lib/kubelet/pods/2b9e4f72-eb97-434b-aba4-ebf37ef1f51d/volumes"
Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.282006 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd" exitCode=1
Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.282078 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd"}
Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.282126 4835 scope.go:117] "RemoveContainer" containerID="f70585ca73d7397e897eb3941142c52d65b1003b4040f8c826ddc548b6f8f0d4"
Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.283232 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad"
Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.283399 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea"
Feb 01 07:38:40 crc kubenswrapper[4835]: I0201 07:38:40.283708 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd"
Feb 01 07:38:40 crc kubenswrapper[4835]: E0201 07:38:40.284581 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:38:41 crc kubenswrapper[4835]: I0201 07:38:41.316722 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad"
Feb 01 07:38:41 crc kubenswrapper[4835]: I0201 07:38:41.317232 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea"
Feb 01 07:38:41 crc kubenswrapper[4835]: I0201 07:38:41.317476 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd"
Feb 01 07:38:41 crc kubenswrapper[4835]: E0201 07:38:41.317945 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:38:42 crc kubenswrapper[4835]: I0201 07:38:42.536733 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:42 crc kubenswrapper[4835]: I0201 07:38:42.539531 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:42 crc kubenswrapper[4835]: I0201 07:38:42.703322 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:38:42 crc kubenswrapper[4835]: E0201 07:38:42.703562 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 07:38:42 crc kubenswrapper[4835]: E0201 07:38:42.703663 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:38:58.703640203 +0000 UTC m=+1011.824076677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.353481 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585" exitCode=1
Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.353711 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585"}
Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.354702 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad"
Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.354795 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea"
Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.354895 4835 scope.go:117] "RemoveContainer" containerID="ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585"
Feb 01 07:38:44 crc kubenswrapper[4835]: I0201 07:38:44.354917 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd"
Feb 01 07:38:44 crc kubenswrapper[4835]: E0201 07:38:44.601401 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.371859 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1"}
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.374018 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad"
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.374201 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea"
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.374400 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd"
Feb 01 07:38:45 crc kubenswrapper[4835]: E0201 07:38:45.374816 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.538460 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.538571 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.539539 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.539583 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399"
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.539634 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7" gracePeriod=30
Feb 01 07:38:45 crc kubenswrapper[4835]: I0201 07:38:45.541122 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.380567 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7" exitCode=0
Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.381142 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7"}
Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.381222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"}
Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.381292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c"}
Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.382166 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:46 crc kubenswrapper[4835]: I0201 07:38:46.382308 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:47 crc kubenswrapper[4835]: I0201 07:38:47.397637 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" exitCode=1
Feb 01 07:38:47 crc kubenswrapper[4835]: I0201 07:38:47.397703 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"}
Feb 01 07:38:47 crc kubenswrapper[4835]: I0201 07:38:47.398147 4835 scope.go:117] "RemoveContainer" containerID="8440c1f4d614f0c8dcf201ce925fbc74b3533dc622fe9d31ba340383b5b94399"
Feb 01 07:38:47 crc kubenswrapper[4835]: I0201 07:38:47.398841 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"
Feb 01 07:38:47 crc kubenswrapper[4835]: E0201 07:38:47.399319 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:38:48 crc kubenswrapper[4835]: I0201 07:38:48.420621 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"
Feb 01 07:38:48 crc kubenswrapper[4835]: E0201 07:38:48.421170 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:38:48 crc kubenswrapper[4835]: I0201 07:38:48.535869 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:49 crc kubenswrapper[4835]: I0201 07:38:49.430722 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"
Feb 01 07:38:49 crc kubenswrapper[4835]: E0201 07:38:49.431564 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:38:51 crc kubenswrapper[4835]: I0201 07:38:51.539995 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:52 crc kubenswrapper[4835]: I0201 07:38:52.536761 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:54 crc kubenswrapper[4835]: I0201 07:38:54.537683 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:55 crc kubenswrapper[4835]: I0201 07:38:55.567243 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad"
Feb 01 07:38:55 crc kubenswrapper[4835]: I0201 07:38:55.567326 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea"
Feb 01 07:38:55 crc kubenswrapper[4835]: I0201 07:38:55.567480 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd"
Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.502811 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" exitCode=1
Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503697 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" exitCode=1
Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11"}
Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503801 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0"}
Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf"}
Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.503865 4835 scope.go:117] "RemoveContainer" containerID="4a9287bfcaa5f80b4ec063a847130b17b81b072e86f81410aa5a32857dbeafea"
Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.504979 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf"
Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.505207 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0"
Feb 01 07:38:56 crc kubenswrapper[4835]: E0201 07:38:56.506248 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:38:56 crc kubenswrapper[4835]: I0201 07:38:56.565450 4835 scope.go:117] "RemoveContainer" containerID="b9f112558ad9d682c284122a1e91ab89674b43f70476f759a2b6e95183c6e5ad"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.527849 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11" exitCode=1
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.527907 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11"}
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.527971 4835 scope.go:117] "RemoveContainer" containerID="d341c0bbafe56f527a2f6fcc455b31be1cec8017e5dce1f395522342e36a57bd"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.529220 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.529361 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.529594 4835 scope.go:117] "RemoveContainer" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11"
Feb 01 07:38:57 crc kubenswrapper[4835]: E0201 07:38:57.529930 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.537637 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.537915 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.537964 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.538646 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.538669 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.538695 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c" gracePeriod=30
Feb 01 07:38:57 crc kubenswrapper[4835]: I0201 07:38:57.540018 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:38:57 crc kubenswrapper[4835]: E0201 07:38:57.860575 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.553376 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c" exitCode=0
Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.553608 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c"}
Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.553862 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732"}
Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.553896 4835 scope.go:117] "RemoveContainer" containerID="95517c83908e8e06df5319306b204bf523fb0839c1f428a4cd25e36acc6805d7"
Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.554214 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.554753 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"
Feb 01 07:38:58 crc kubenswrapper[4835]: E0201 07:38:58.555067 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:38:58 crc kubenswrapper[4835]: I0201 07:38:58.766454 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:38:58 crc kubenswrapper[4835]: E0201 07:38:58.766598 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 07:38:58 crc kubenswrapper[4835]: E0201 07:38:58.767090 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:39:30.767067401 +0000 UTC m=+1043.887503835 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 07:38:59 crc kubenswrapper[4835]: I0201 07:38:59.565698 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12"
Feb 01 07:38:59 crc kubenswrapper[4835]: E0201 07:38:59.565912 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:39:02 crc kubenswrapper[4835]: I0201 07:39:02.539771 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:39:03 crc kubenswrapper[4835]: I0201 07:39:03.538059 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:39:06 crc kubenswrapper[4835]: I0201 07:39:06.539725 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:39:07 crc kubenswrapper[4835]: I0201 07:39:07.537590 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.537649 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:39:09 crc kubenswrapper[4835]:
I0201 07:39:09.538350 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.539059 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.539097 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.539135 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732" gracePeriod=30 Feb 01 07:39:09 crc kubenswrapper[4835]: I0201 07:39:09.546085 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.685634 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732" exitCode=0 Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.687399 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732"} Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.687547 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e"} Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.687578 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941"} Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.688193 4835 scope.go:117] "RemoveContainer" containerID="75d0c7a0859d358275bbd9cd41f9c9912bc0c5d0048e4cc77e453810a0147a9c" Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.690048 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:10 crc kubenswrapper[4835]: I0201 07:39:10.690115 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:11 crc kubenswrapper[4835]: I0201 07:39:11.701981 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" exitCode=1 Feb 01 07:39:11 crc kubenswrapper[4835]: I0201 07:39:11.702066 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e"} Feb 01 07:39:11 crc kubenswrapper[4835]: I0201 07:39:11.702585 4835 scope.go:117] "RemoveContainer" containerID="72e23eb3fd4c06d3121c6bc6be3d1d1150bf0540c81b065f65be321c24207c12" Feb 01 07:39:11 crc kubenswrapper[4835]: I0201 07:39:11.702776 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:11 crc kubenswrapper[4835]: E0201 07:39:11.703103 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.535711 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.566766 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.566845 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.567057 4835 scope.go:117] "RemoveContainer" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11" Feb 01 07:39:12 crc kubenswrapper[4835]: E0201 07:39:12.567353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:12 crc kubenswrapper[4835]: I0201 07:39:12.723081 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:12 crc kubenswrapper[4835]: E0201 07:39:12.723259 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:13 crc kubenswrapper[4835]: I0201 07:39:13.731178 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:13 crc kubenswrapper[4835]: E0201 07:39:13.731496 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:15 crc kubenswrapper[4835]: I0201 07:39:15.538265 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:17 crc kubenswrapper[4835]: I0201 07:39:17.538571 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:18 crc kubenswrapper[4835]: I0201 07:39:18.537217 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.538127 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.538595 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.539523 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.539566 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.539616 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" gracePeriod=30 Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.541264 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:21 crc kubenswrapper[4835]: E0201 07:39:21.659994 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.798989 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" exitCode=0 Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.799071 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941"} Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.799276 4835 scope.go:117] "RemoveContainer" containerID="fa3f2568319ce6136ef7d36ac06dd33397f56b27f1065bb9754e7a8f9c652732" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.799851 4835 scope.go:117] "RemoveContainer" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" Feb 01 07:39:21 crc kubenswrapper[4835]: I0201 07:39:21.799888 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:21 crc kubenswrapper[4835]: E0201 07:39:21.800122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:26 crc kubenswrapper[4835]: I0201 07:39:26.568600 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" Feb 01 07:39:26 crc kubenswrapper[4835]: I0201 07:39:26.569878 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" Feb 01 07:39:26 crc kubenswrapper[4835]: I0201 07:39:26.570125 4835 scope.go:117] "RemoveContainer" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11" Feb 01 07:39:26 crc kubenswrapper[4835]: I0201 07:39:26.886777 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918261 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" exitCode=1 Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918317 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540" exitCode=1 Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918326 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" 
containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" exitCode=1 Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918386 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918508 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918538 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918560 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c"} Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.918597 4835 scope.go:117] "RemoveContainer" containerID="11e88e751977370741f6b5e960b76603831e02e5e523e8af6f09b7da2bb588cf" Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.919169 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.919333 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.919381 4835 scope.go:117] "RemoveContainer" containerID="0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540" Feb 01 07:39:27 crc kubenswrapper[4835]: I0201 07:39:27.983330 4835 scope.go:117] "RemoveContainer" containerID="3c25912e774b4a018588b67eec51d3f705636a69f2e60b464c915225815cf0b0" Feb 01 07:39:28 crc kubenswrapper[4835]: E0201 07:39:28.164812 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.936460 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" exitCode=1 Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.936524 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e"} Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.936634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c"} Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.936670 4835 scope.go:117] "RemoveContainer" containerID="e21fb413506e5d62cd5f2d7cf365fc8dc7c34b194da431855832871b91a3eb11" Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.937457 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.937612 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:28 crc kubenswrapper[4835]: I0201 07:39:28.937875 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:39:28 crc kubenswrapper[4835]: E0201 07:39:28.938621 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:29 crc kubenswrapper[4835]: I0201 07:39:29.957516 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:29 crc kubenswrapper[4835]: I0201 07:39:29.957616 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:29 crc kubenswrapper[4835]: I0201 07:39:29.957731 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:39:29 crc kubenswrapper[4835]: E0201 07:39:29.958098 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:30 crc kubenswrapper[4835]: I0201 07:39:30.770600 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:39:30 crc kubenswrapper[4835]: E0201 07:39:30.770864 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:39:30 crc kubenswrapper[4835]: E0201 07:39:30.771268 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:40:34.771226436 +0000 UTC m=+1107.891662910 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:39:37 crc kubenswrapper[4835]: I0201 07:39:37.576757 4835 scope.go:117] "RemoveContainer" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" Feb 01 07:39:37 crc kubenswrapper[4835]: I0201 07:39:37.577481 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:37 crc kubenswrapper[4835]: E0201 07:39:37.577958 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:43 crc kubenswrapper[4835]: I0201 07:39:43.567969 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:43 crc kubenswrapper[4835]: I0201 07:39:43.568808 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:43 crc kubenswrapper[4835]: I0201 07:39:43.569018 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:39:43 crc kubenswrapper[4835]: E0201 07:39:43.569515 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:49 crc kubenswrapper[4835]: I0201 07:39:49.567288 4835 scope.go:117] "RemoveContainer" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" Feb 01 07:39:49 crc 
kubenswrapper[4835]: I0201 07:39:49.567988 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:49 crc kubenswrapper[4835]: E0201 07:39:49.779099 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:50 crc kubenswrapper[4835]: I0201 07:39:50.141065 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc"} Feb 01 07:39:50 crc kubenswrapper[4835]: I0201 07:39:50.141453 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:50 crc kubenswrapper[4835]: I0201 07:39:50.141901 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:50 crc kubenswrapper[4835]: E0201 07:39:50.142312 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:51 crc kubenswrapper[4835]: I0201 07:39:51.150124 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:52 crc kubenswrapper[4835]: I0201 07:39:52.158748 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"} Feb 01 07:39:52 crc kubenswrapper[4835]: I0201 07:39:52.159687 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:53 crc kubenswrapper[4835]: I0201 07:39:53.172009 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" exitCode=1 Feb 01 07:39:53 crc kubenswrapper[4835]: I0201 07:39:53.172073 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"} Feb 01 07:39:53 crc kubenswrapper[4835]: I0201 07:39:53.172144 4835 scope.go:117] "RemoveContainer" containerID="0a497c8712d37261cf8f1fc9f4ffb2c28448ab2e930aae28890134e14805781e" Feb 01 07:39:53 crc kubenswrapper[4835]: I0201 07:39:53.172849 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:39:53 crc kubenswrapper[4835]: E0201 07:39:53.173179 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: 
\"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.185534 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:39:54 crc kubenswrapper[4835]: E0201 07:39:54.186501 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.189541 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.535903 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.538473 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.568498 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.568627 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:39:54 crc kubenswrapper[4835]: I0201 07:39:54.568805 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:39:54 crc kubenswrapper[4835]: E0201 07:39:54.569265 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:39:55 crc kubenswrapper[4835]: I0201 07:39:55.197911 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:39:55 crc kubenswrapper[4835]: E0201 07:39:55.198343 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:39:55 crc kubenswrapper[4835]: I0201 07:39:55.198650 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:57 crc kubenswrapper[4835]: I0201 07:39:57.538258 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:39:57 crc kubenswrapper[4835]: I0201 07:39:57.539109 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.538698 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.539511 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.541532 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.541612 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.541710 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" gracePeriod=30 Feb 01 07:40:00 crc kubenswrapper[4835]: I0201 07:40:00.542942 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:40:00 crc kubenswrapper[4835]: E0201 07:40:00.696065 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" 
podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.270775 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" exitCode=0 Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.270855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc"} Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.271263 4835 scope.go:117] "RemoveContainer" containerID="cec496cd92a05990404df717e665f186b5864c07a6992a24064747a173443941" Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.272090 4835 scope.go:117] "RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" Feb 01 07:40:01 crc kubenswrapper[4835]: I0201 07:40:01.272150 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:01 crc kubenswrapper[4835]: E0201 07:40:01.272720 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:05 crc kubenswrapper[4835]: I0201 07:40:05.567068 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:40:05 crc kubenswrapper[4835]: I0201 07:40:05.567473 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:40:05 crc kubenswrapper[4835]: I0201 07:40:05.567589 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:40:05 crc kubenswrapper[4835]: E0201 07:40:05.567877 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:40:13 crc kubenswrapper[4835]: I0201 07:40:13.566259 4835 scope.go:117] "RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" Feb 01 07:40:13 crc kubenswrapper[4835]: I0201 07:40:13.566825 4835 scope.go:117] "RemoveContainer" 
containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:13 crc kubenswrapper[4835]: E0201 07:40:13.567090 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:18 crc kubenswrapper[4835]: I0201 07:40:18.566614 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:40:18 crc kubenswrapper[4835]: I0201 07:40:18.566916 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:40:18 crc kubenswrapper[4835]: I0201 07:40:18.566997 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452145 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" exitCode=1 Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452539 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" exitCode=1 Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452259 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b"} Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452587 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65"} Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452607 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a"} Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.452627 4835 scope.go:117] "RemoveContainer" containerID="5470c3d8bb06e025047521e30bf183ef333f14764128da9dc890913bbb199e2c" Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.453658 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.453854 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:40:19 crc kubenswrapper[4835]: E0201 07:40:19.463513 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator 
pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:40:19 crc kubenswrapper[4835]: I0201 07:40:19.520111 4835 scope.go:117] "RemoveContainer" containerID="651627147b77ab4d732fd0bc91f5ae77cfe8b2e3dbb977dff79987b3679cfd17" Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.476807 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" exitCode=1 Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.476886 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b"} Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.477266 4835 scope.go:117] "RemoveContainer" containerID="ed0db9d39b522037928ecbdca43640fd1a29af7ddf7d80fc40dff0bb19506f6e" Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.479852 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.480188 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:40:20 crc kubenswrapper[4835]: I0201 07:40:20.481867 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:40:20 crc kubenswrapper[4835]: E0201 07:40:20.482533 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:40:25 crc kubenswrapper[4835]: I0201 07:40:25.191502 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:40:25 crc kubenswrapper[4835]: I0201 07:40:25.191865 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:40:28 crc kubenswrapper[4835]: I0201 07:40:28.566876 4835 scope.go:117] 
"RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" Feb 01 07:40:28 crc kubenswrapper[4835]: I0201 07:40:28.567178 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:28 crc kubenswrapper[4835]: E0201 07:40:28.567372 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:29 crc kubenswrapper[4835]: E0201 07:40:29.724037 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:40:30 crc kubenswrapper[4835]: I0201 07:40:30.601297 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:40:31 crc kubenswrapper[4835]: I0201 07:40:31.567857 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:40:31 crc kubenswrapper[4835]: I0201 07:40:31.568594 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:40:31 crc kubenswrapper[4835]: I0201 07:40:31.568802 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:40:31 crc kubenswrapper[4835]: E0201 07:40:31.569366 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:40:34 crc kubenswrapper[4835]: I0201 07:40:34.872968 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:40:34 crc kubenswrapper[4835]: E0201 07:40:34.873197 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:40:34 crc kubenswrapper[4835]: E0201 07:40:34.873677 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:42:36.873657218 +0000 UTC m=+1229.994093672 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.566665 4835 scope.go:117] "RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.567379 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.568129 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.568277 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:40:43 crc kubenswrapper[4835]: I0201 07:40:43.568508 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:40:43 crc kubenswrapper[4835]: E0201 07:40:43.569035 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:40:43 crc kubenswrapper[4835]: E0201 07:40:43.778612 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:44 crc kubenswrapper[4835]: I0201 07:40:44.752976 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"} Feb 01 07:40:44 crc kubenswrapper[4835]: I0201 07:40:44.753285 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:40:44 crc kubenswrapper[4835]: I0201 07:40:44.753811 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:44 crc kubenswrapper[4835]: E0201 
Feb 01 07:40:44 crc kubenswrapper[4835]: E0201 07:40:44.754174 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:40:45 crc kubenswrapper[4835]: I0201 07:40:45.760953 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:40:45 crc kubenswrapper[4835]: E0201 07:40:45.761656 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:40:48 crc kubenswrapper[4835]: I0201 07:40:48.540193 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:51 crc kubenswrapper[4835]: I0201 07:40:51.538699 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:52 crc kubenswrapper[4835]: I0201 07:40:52.539887 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.538215 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.538311 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.539247 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.539271 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.539303 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" gracePeriod=30
Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.540111 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
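The repeated 503s above are what the kubelet's HTTP prober sees: any status outside 200-399 counts as a failure, and enough consecutive liveness failures trigger the "Killing container with a grace period" restart that follows. A minimal sketch of an equivalent check; the URL is a placeholder, since the swift-proxy probe path is not shown in the log.

    import urllib.request
    import urllib.error

    def probe(url: str, timeout: float = 1.0) -> bool:
        """Rough equivalent of a kubelet HTTP probe: 200-399 is success."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except urllib.error.HTTPError as e:
            return 200 <= e.code < 400   # a 503 lands here and fails the probe
        except (urllib.error.URLError, OSError):
            return False                 # e.g. "connection refused", as seen for machine-config-daemon

    print(probe("http://127.0.0.1:8080/healthcheck"))  # hypothetical endpoint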
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:40:54 crc kubenswrapper[4835]: E0201 07:40:54.668068 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.842676 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" exitCode=0 Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.842690 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"} Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.842802 4835 scope.go:117] "RemoveContainer" containerID="3a9de78b83c8f836fae857cdb1c5fa379b1a8ba796f88b34891fed9a8325a7dc" Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.843785 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:40:54 crc kubenswrapper[4835]: I0201 07:40:54.843840 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:40:54 crc kubenswrapper[4835]: E0201 07:40:54.844568 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.192201 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.192302 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.567712 4835 scope.go:117] "RemoveContainer" 
containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.567838 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:40:55 crc kubenswrapper[4835]: I0201 07:40:55.568023 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:40:55 crc kubenswrapper[4835]: E0201 07:40:55.568698 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:41:08 crc kubenswrapper[4835]: I0201 07:41:08.567808 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:41:08 crc kubenswrapper[4835]: I0201 07:41:08.568658 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:41:08 crc kubenswrapper[4835]: I0201 07:41:08.568852 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:41:08 crc kubenswrapper[4835]: E0201 07:41:08.569440 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:41:09 crc kubenswrapper[4835]: I0201 07:41:09.567751 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:09 crc kubenswrapper[4835]: I0201 07:41:09.567795 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:41:09 crc kubenswrapper[4835]: E0201 07:41:09.568121 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
Feb 01 07:41:21 crc kubenswrapper[4835]: I0201 07:41:21.567369 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a"
Feb 01 07:41:21 crc kubenswrapper[4835]: I0201 07:41:21.568068 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65"
Feb 01 07:41:21 crc kubenswrapper[4835]: I0201 07:41:21.568249 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b"
Feb 01 07:41:21 crc kubenswrapper[4835]: E0201 07:41:21.568827 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:41:24 crc kubenswrapper[4835]: I0201 07:41:24.567571 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:41:24 crc kubenswrapper[4835]: I0201 07:41:24.567613 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a"
Feb 01 07:41:24 crc kubenswrapper[4835]: E0201 07:41:24.846385 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.133898 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"}
Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.134426 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:41:25 crc kubenswrapper[4835]: E0201 07:41:25.134611 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.134747 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.192521 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.192926 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.193149 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.194547 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:41:25 crc kubenswrapper[4835]: I0201 07:41:25.194845 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3" gracePeriod=600 Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.145086 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" exitCode=1 Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.145140 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"} Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.145806 4835 scope.go:117] "RemoveContainer" containerID="dd52d3e958088dbad8f8abb2040b2943b0a889cd65a7e94d1aa15a35287dab1a" Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.146026 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.146116 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:41:26 crc kubenswrapper[4835]: E0201 07:41:26.146725 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.151020 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3" exitCode=0 Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.151086 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3"} Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.151138 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced"} Feb 01 07:41:26 crc kubenswrapper[4835]: I0201 07:41:26.196123 4835 scope.go:117] "RemoveContainer" containerID="9ccb60f81487a17626bf941abb39b090063342e92bdcf8f103587fb1912c3a05" Feb 01 07:41:27 crc kubenswrapper[4835]: I0201 07:41:27.168498 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:27 crc kubenswrapper[4835]: I0201 07:41:27.168837 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:41:27 crc kubenswrapper[4835]: E0201 07:41:27.169191 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:41:27 crc kubenswrapper[4835]: I0201 07:41:27.535396 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:41:28 crc kubenswrapper[4835]: I0201 07:41:28.182779 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:28 crc kubenswrapper[4835]: I0201 07:41:28.182822 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:41:28 crc kubenswrapper[4835]: E0201 07:41:28.183170 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:41:32 crc kubenswrapper[4835]: I0201 07:41:32.568105 4835 scope.go:117] "RemoveContainer" 
containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:41:32 crc kubenswrapper[4835]: I0201 07:41:32.568973 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:41:32 crc kubenswrapper[4835]: I0201 07:41:32.569154 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:41:32 crc kubenswrapper[4835]: E0201 07:41:32.569867 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:41:40 crc kubenswrapper[4835]: I0201 07:41:40.567158 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a" Feb 01 07:41:40 crc kubenswrapper[4835]: I0201 07:41:40.567744 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:41:40 crc kubenswrapper[4835]: E0201 07:41:40.567967 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:41:44 crc kubenswrapper[4835]: I0201 07:41:44.567154 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:41:44 crc kubenswrapper[4835]: I0201 07:41:44.567820 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65" Feb 01 07:41:44 crc kubenswrapper[4835]: I0201 07:41:44.567910 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393067 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" exitCode=1 Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393154 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"} Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393462 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"} Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393481 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"} Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.393505 4835 scope.go:117] "RemoveContainer" containerID="c67489a852fc678b8b8070bdd6c72c43149b43e5cf022690eb1335f307406b4a" Feb 01 07:41:45 crc kubenswrapper[4835]: I0201 07:41:45.394524 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:41:45 crc kubenswrapper[4835]: E0201 07:41:45.395243 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407120 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" exitCode=1 Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407388 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" exitCode=1 Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407190 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"} Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407448 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"} Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.407472 4835 scope.go:117] "RemoveContainer" containerID="af3442fc69acaeba80a19e27f306935ce2d9985a759851dde5cfbdccd33c924b" Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.408130 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.408191 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.408285 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:41:46 crc kubenswrapper[4835]: E0201 07:41:46.408671 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with 
Feb 01 07:41:46 crc kubenswrapper[4835]: I0201 07:41:46.478334 4835 scope.go:117] "RemoveContainer" containerID="cca7a262e0776577eba905cce210509fc2c1a91b31f942b1bede0077a4431e65"
Feb 01 07:41:47 crc kubenswrapper[4835]: I0201 07:41:47.442982 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:41:47 crc kubenswrapper[4835]: I0201 07:41:47.443105 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:41:47 crc kubenswrapper[4835]: I0201 07:41:47.443281 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:41:47 crc kubenswrapper[4835]: E0201 07:41:47.443842 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:41:55 crc kubenswrapper[4835]: I0201 07:41:55.569306 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:41:55 crc kubenswrapper[4835]: I0201 07:41:55.570012 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:41:55 crc kubenswrapper[4835]: E0201 07:41:55.570345 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.543807 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1" exitCode=1
Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.543900 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1"}
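Several swift-storage-0 containers are now cycling through ContainerStarted/ContainerDied with exitCode=1. A quick way to see which containers are waiting in CrashLoopBackOff and why, using the Python kubernetes client (pod and namespace names taken from the log):

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = v1.read_namespaced_pod("swift-storage-0", "swift-kuttl-tests")
    for cs in pod.status.container_statuses or []:
        if cs.state.waiting:      # e.g. reason == "CrashLoopBackOff"
            print(cs.name, cs.state.waiting.reason, cs.state.waiting.message)
        elif cs.state.terminated:
            print(cs.name, "exited with", cs.state.terminated.exit_code)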
Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.544362 4835 scope.go:117] "RemoveContainer" containerID="ce9f6e51f49479167482c65a57955f65790012dea41865e75c987db5f30a8585"
Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.546529 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.546622 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.546727 4835 scope.go:117] "RemoveContainer" containerID="feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1"
Feb 01 07:41:57 crc kubenswrapper[4835]: I0201 07:41:57.546751 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:41:57 crc kubenswrapper[4835]: E0201 07:41:57.547219 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:42:07 crc kubenswrapper[4835]: I0201 07:42:07.585019 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:42:07 crc kubenswrapper[4835]: I0201 07:42:07.585882 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:07 crc kubenswrapper[4835]: E0201 07:42:07.587569 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:42:09 crc kubenswrapper[4835]: I0201 07:42:09.567950 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:42:09 crc kubenswrapper[4835]: I0201 07:42:09.568703 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:42:09 crc kubenswrapper[4835]: I0201 07:42:09.568893 4835 scope.go:117] "RemoveContainer" containerID="feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1"
Feb 01 07:42:09 crc kubenswrapper[4835]: I0201 07:42:09.568917 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:42:09 crc kubenswrapper[4835]: E0201 07:42:09.743742 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:42:10 crc kubenswrapper[4835]: I0201 07:42:10.691482 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b"}
Feb 01 07:42:10 crc kubenswrapper[4835]: I0201 07:42:10.692673 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:42:10 crc kubenswrapper[4835]: I0201 07:42:10.692761 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:42:10 crc kubenswrapper[4835]: I0201 07:42:10.692883 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:42:10 crc kubenswrapper[4835]: E0201 07:42:10.693211 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.567842 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.568590 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:21 crc kubenswrapper[4835]: E0201 07:42:21.755895 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
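With proxy-server repeatedly dying on start, the crash reason is usually in the output of the previous (terminated) attempt rather than the current one. A sketch of fetching it with the Python kubernetes client, using the pod and container names from the log:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # previous=True returns the log of the last terminated attempt, which is
    # typically where a CrashLoopBackOff container's failure message lives.
    log = v1.read_namespaced_pod_log(
        "swift-proxy-7d8cf99555-6vq9r",
        "swift-kuttl-tests",
        container="proxy-server",
        previous=True,
        tail_lines=50,
    )
    print(log)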
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.787460 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"}
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.787700 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:42:21 crc kubenswrapper[4835]: I0201 07:42:21.788066 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:21 crc kubenswrapper[4835]: E0201 07:42:21.788447 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:42:22 crc kubenswrapper[4835]: I0201 07:42:22.796039 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:22 crc kubenswrapper[4835]: E0201 07:42:22.796610 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:42:25 crc kubenswrapper[4835]: I0201 07:42:25.566785 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:42:25 crc kubenswrapper[4835]: I0201 07:42:25.566873 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:42:25 crc kubenswrapper[4835]: I0201 07:42:25.567008 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:42:25 crc kubenswrapper[4835]: E0201 07:42:25.567378 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:42:27 crc kubenswrapper[4835]: I0201 07:42:27.539604 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:27 crc kubenswrapper[4835]: I0201 07:42:27.539867 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:30 crc kubenswrapper[4835]: I0201 07:42:30.537987 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:32 crc kubenswrapper[4835]: I0201 07:42:32.537539 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.537822 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.537916 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.538662 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.538686 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.538722 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" gracePeriod=30
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.539573 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:42:33 crc kubenswrapper[4835]: E0201 07:42:33.603012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc"
Feb 01 07:42:33 crc kubenswrapper[4835]: E0201 07:42:33.661710 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.895109 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" exitCode=0
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.895186 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"}
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.895493 4835 scope.go:117] "RemoveContainer" containerID="060875a78e413ef804483530b54924ab196e3cb7a16f3c79784e07336dfd379a"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.895779 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.896586 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"
Feb 01 07:42:33 crc kubenswrapper[4835]: I0201 07:42:33.896641 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:33 crc kubenswrapper[4835]: E0201 07:42:33.896981 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:42:36 crc kubenswrapper[4835]: I0201 07:42:36.878859 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:42:36 crc kubenswrapper[4835]: E0201 07:42:36.879050 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 07:42:36 crc kubenswrapper[4835]: E0201 07:42:36.880045 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:44:38.880002249 +0000 UTC m=+1352.000438713 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
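Both nestedpendingoperations entries show durationBeforeRetry 2m2s, which suggests the mount retry interval has hit its ceiling: the kubelet backs off failed volume operations exponentially up to a cap. A sketch of that progression, assuming upstream-default constants (500ms initial delay, factor 2, 2m2s cap; these values are an assumption, not taken from this log):

    # Volume-operation backoff: exponential with a hard cap, so a persistently
    # missing ConfigMap settles into roughly one retry every two minutes.
    def volume_retry_delays(attempts: int, initial: float = 0.5,
                            factor: float = 2.0, cap: float = 122.0):
        delay = initial
        for _ in range(attempts):
            yield min(delay, cap)
            delay *= factor

    print([round(d, 1) for d in volume_retry_delays(10)])
    # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 122.0, 122.0]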
Feb 01 07:42:38 crc kubenswrapper[4835]: I0201 07:42:38.568661 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:42:38 crc kubenswrapper[4835]: I0201 07:42:38.569177 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:42:38 crc kubenswrapper[4835]: I0201 07:42:38.569455 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:42:38 crc kubenswrapper[4835]: E0201 07:42:38.570247 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:42:48 crc kubenswrapper[4835]: I0201 07:42:48.568397 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"
Feb 01 07:42:48 crc kubenswrapper[4835]: I0201 07:42:48.569085 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:42:48 crc kubenswrapper[4835]: E0201 07:42:48.569543 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:42:51 crc kubenswrapper[4835]: I0201 07:42:51.567995 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:42:51 crc kubenswrapper[4835]: I0201 07:42:51.568615 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:42:51 crc kubenswrapper[4835]: I0201 07:42:51.568881 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:42:51 crc kubenswrapper[4835]: E0201 07:42:51.569499 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:43:02 crc kubenswrapper[4835]: I0201 07:43:02.567259 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"
Feb 01 07:43:02 crc kubenswrapper[4835]: I0201 07:43:02.567955 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:43:02 crc kubenswrapper[4835]: E0201 07:43:02.568311 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:43:05 crc kubenswrapper[4835]: I0201 07:43:05.568498 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948"
Feb 01 07:43:05 crc kubenswrapper[4835]: I0201 07:43:05.568964 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac"
Feb 01 07:43:05 crc kubenswrapper[4835]: I0201 07:43:05.569146 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6"
Feb 01 07:43:05 crc kubenswrapper[4835]: E0201 07:43:05.569701 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:43:14 crc kubenswrapper[4835]: I0201 07:43:14.568024 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"
Feb 01 07:43:14 crc kubenswrapper[4835]: I0201 07:43:14.568780 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50"
Feb 01 07:43:14 crc kubenswrapper[4835]: E0201 07:43:14.569206 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
\"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:43:19 crc kubenswrapper[4835]: I0201 07:43:19.567482 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:19 crc kubenswrapper[4835]: I0201 07:43:19.567910 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:19 crc kubenswrapper[4835]: I0201 07:43:19.568040 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:19 crc kubenswrapper[4835]: E0201 07:43:19.568370 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.354008 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c" exitCode=1 Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.354060 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c"} Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.354536 4835 scope.go:117] "RemoveContainer" containerID="0a25ac97b5294b86a329b0b8a00b6a7ec519f70771d4bc4890be6a3eaa416540" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.355945 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.364702 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.364778 4835 scope.go:117] "RemoveContainer" containerID="fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c" Feb 01 07:43:24 crc kubenswrapper[4835]: I0201 07:43:24.364928 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:24 crc kubenswrapper[4835]: E0201 07:43:24.365803 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:25 crc kubenswrapper[4835]: I0201 07:43:25.192189 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:43:25 crc kubenswrapper[4835]: I0201 07:43:25.192296 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:43:29 crc kubenswrapper[4835]: I0201 07:43:29.567107 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:43:29 crc kubenswrapper[4835]: I0201 07:43:29.569562 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:43:29 crc kubenswrapper[4835]: E0201 07:43:29.570077 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:43:39 crc kubenswrapper[4835]: I0201 07:43:39.567701 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:39 crc kubenswrapper[4835]: I0201 07:43:39.568507 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:39 crc kubenswrapper[4835]: I0201 07:43:39.568565 4835 scope.go:117] "RemoveContainer" containerID="fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c" Feb 01 07:43:39 crc kubenswrapper[4835]: I0201 07:43:39.568720 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:39 crc kubenswrapper[4835]: E0201 07:43:39.774591 4835 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:40 crc kubenswrapper[4835]: I0201 07:43:40.513147 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4"} Feb 01 07:43:40 crc kubenswrapper[4835]: I0201 07:43:40.514175 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:40 crc kubenswrapper[4835]: I0201 07:43:40.514299 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:40 crc kubenswrapper[4835]: I0201 07:43:40.514518 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:40 crc kubenswrapper[4835]: E0201 07:43:40.515207 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:44 crc kubenswrapper[4835]: I0201 07:43:44.567782 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:43:44 crc kubenswrapper[4835]: I0201 07:43:44.568685 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:43:44 crc kubenswrapper[4835]: E0201 07:43:44.569355 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:43:55 crc 
kubenswrapper[4835]: I0201 07:43:55.191939 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:43:55 crc kubenswrapper[4835]: I0201 07:43:55.192599 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:43:55 crc kubenswrapper[4835]: I0201 07:43:55.568825 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:55 crc kubenswrapper[4835]: I0201 07:43:55.568954 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:55 crc kubenswrapper[4835]: I0201 07:43:55.569133 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:55 crc kubenswrapper[4835]: E0201 07:43:55.569742 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.699388 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b" exitCode=1 Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.699466 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b"} Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.700651 4835 scope.go:117] "RemoveContainer" containerID="feb2c5663f63accc851097dd3f05b8f4f19e67efe2c719e8d3a4538c5779d9f1" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.701642 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.701761 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.701923 4835 scope.go:117] "RemoveContainer" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b" Feb 01 07:43:58 crc kubenswrapper[4835]: I0201 07:43:58.701957 4835 scope.go:117] "RemoveContainer" 
containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:43:58 crc kubenswrapper[4835]: E0201 07:43:58.702537 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:43:59 crc kubenswrapper[4835]: I0201 07:43:59.566760 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:43:59 crc kubenswrapper[4835]: I0201 07:43:59.567073 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:43:59 crc kubenswrapper[4835]: E0201 07:43:59.567302 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 07:44:10.567013 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 07:44:10.567725 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:44:10 crc kubenswrapper[4835]: E0201 07:44:10.745455 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 07:44:10.901076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"} Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 07:44:10.901806 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:44:10 crc kubenswrapper[4835]: I0201 
07:44:10.902523 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:10 crc kubenswrapper[4835]: E0201 07:44:10.903154 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.931279 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" exitCode=1 Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.931338 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"} Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.931379 4835 scope.go:117] "RemoveContainer" containerID="ec837ed41dd6d480dabc7407944e91632c5429fa4578edfdfeb1deda91201e50" Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.932141 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:11 crc kubenswrapper[4835]: I0201 07:44:11.932176 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:11 crc kubenswrapper[4835]: E0201 07:44:11.932668 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.535640 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.567152 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.567239 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.567394 4835 scope.go:117] "RemoveContainer" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.567403 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:44:12 crc kubenswrapper[4835]: E0201 07:44:12.567992 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator 
pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.945580 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:12 crc kubenswrapper[4835]: I0201 07:44:12.945619 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:12 crc kubenswrapper[4835]: E0201 07:44:12.945979 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:13 crc kubenswrapper[4835]: I0201 07:44:13.954734 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:13 crc kubenswrapper[4835]: I0201 07:44:13.954785 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:13 crc kubenswrapper[4835]: E0201 07:44:13.955246 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:24 crc kubenswrapper[4835]: I0201 07:44:24.582185 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:24 crc kubenswrapper[4835]: I0201 07:44:24.583314 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:24 crc kubenswrapper[4835]: E0201 07:44:24.583773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.192309 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.192456 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.192520 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.193342 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:44:25 crc kubenswrapper[4835]: I0201 07:44:25.193469 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced" gracePeriod=600 Feb 01 07:44:26 crc kubenswrapper[4835]: I0201 07:44:26.089024 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced" exitCode=0 Feb 01 07:44:26 crc kubenswrapper[4835]: I0201 07:44:26.089390 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced"} Feb 01 07:44:26 crc kubenswrapper[4835]: I0201 07:44:26.089935 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"} Feb 01 07:44:26 crc kubenswrapper[4835]: I0201 07:44:26.089986 4835 scope.go:117] "RemoveContainer" containerID="19428f932c6c98ecc149a201b9cb2f965faa26b06f4629d2e4af89e8080412f3" Feb 01 07:44:27 crc kubenswrapper[4835]: I0201 07:44:27.577306 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:44:27 crc kubenswrapper[4835]: I0201 07:44:27.577883 4835 
scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:44:27 crc kubenswrapper[4835]: I0201 07:44:27.578039 4835 scope.go:117] "RemoveContainer" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b" Feb 01 07:44:27 crc kubenswrapper[4835]: I0201 07:44:27.578053 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:44:28 crc kubenswrapper[4835]: I0201 07:44:28.119292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"} Feb 01 07:44:28 crc kubenswrapper[4835]: I0201 07:44:28.119869 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"} Feb 01 07:44:28 crc kubenswrapper[4835]: E0201 07:44:28.929927 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edd7394_0f8e_4271_8774_f228946e62f3.slice/crio-8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edd7394_0f8e_4271_8774_f228946e62f3.slice/crio-conmon-8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328.scope\": RecentStats: unable to find data in memory cache]" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145365 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" exitCode=1 Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145453 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" exitCode=1 Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145469 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" exitCode=1 Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145471 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"} Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145556 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"} Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145577 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"} Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145596 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7"} Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.145624 4835 scope.go:117] "RemoveContainer" containerID="14add5b6b6e41abdb7feca0316f8f7a7d42872aabe3bcae0f5ea8a6c586d9aac" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.146684 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.146830 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.147071 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:44:29 crc kubenswrapper[4835]: E0201 07:44:29.147781 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.212477 4835 scope.go:117] "RemoveContainer" containerID="8079a896e332b6b66e3a290ab330676ebb1b0ada241d1e9c3abf3f3b36f35948" Feb 01 07:44:29 crc kubenswrapper[4835]: I0201 07:44:29.264278 4835 scope.go:117] "RemoveContainer" containerID="c273b76545d339636d6955ebcb81fc4666a51990b01f36ec061cc227106a60e6" Feb 01 07:44:30 crc kubenswrapper[4835]: I0201 07:44:30.182940 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:44:30 crc kubenswrapper[4835]: I0201 07:44:30.183071 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:44:30 crc kubenswrapper[4835]: I0201 07:44:30.183251 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:44:30 crc kubenswrapper[4835]: E0201 07:44:30.185362 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 
01 07:44:35 crc kubenswrapper[4835]: I0201 07:44:35.567776 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:35 crc kubenswrapper[4835]: I0201 07:44:35.568609 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:35 crc kubenswrapper[4835]: E0201 07:44:35.568964 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:36 crc kubenswrapper[4835]: E0201 07:44:36.899102 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:44:37 crc kubenswrapper[4835]: I0201 07:44:37.258446 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:44:38 crc kubenswrapper[4835]: I0201 07:44:38.974982 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:44:38 crc kubenswrapper[4835]: E0201 07:44:38.975130 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:44:38 crc kubenswrapper[4835]: E0201 07:44:38.975212 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:46:40.975191247 +0000 UTC m=+1474.095627681 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:44:45 crc kubenswrapper[4835]: I0201 07:44:45.567612 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:44:45 crc kubenswrapper[4835]: I0201 07:44:45.568372 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:44:45 crc kubenswrapper[4835]: I0201 07:44:45.568563 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:44:45 crc kubenswrapper[4835]: E0201 07:44:45.568976 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:44:46 crc kubenswrapper[4835]: I0201 07:44:46.567518 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:46 crc kubenswrapper[4835]: I0201 07:44:46.567944 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:46 crc kubenswrapper[4835]: E0201 07:44:46.568482 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:57 crc kubenswrapper[4835]: I0201 07:44:57.574616 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:44:57 crc kubenswrapper[4835]: I0201 07:44:57.575253 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:44:57 crc kubenswrapper[4835]: E0201 07:44:57.575724 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:44:59 crc kubenswrapper[4835]: I0201 07:44:59.567216 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:44:59 crc kubenswrapper[4835]: I0201 07:44:59.567390 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:44:59 crc kubenswrapper[4835]: I0201 07:44:59.567670 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:44:59 crc kubenswrapper[4835]: E0201 07:44:59.568187 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.160622 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"] Feb 01 07:45:00 crc kubenswrapper[4835]: E0201 07:45:00.161127 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="extract-utilities" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.161155 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="extract-utilities" Feb 01 07:45:00 crc kubenswrapper[4835]: E0201 07:45:00.161180 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="extract-content" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.161189 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="extract-content" Feb 01 07:45:00 crc kubenswrapper[4835]: E0201 07:45:00.161218 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.161228 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.161373 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9e4f72-eb97-434b-aba4-ebf37ef1f51d" containerName="registry-server" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.162013 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.164712 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.164993 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.166821 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"] Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.336263 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.336371 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.336428 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.437691 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.437805 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.437880 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.439875 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") pod 
\"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.454219 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.460332 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") pod \"collect-profiles-29498865-vm9z5\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.495276 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:00 crc kubenswrapper[4835]: I0201 07:45:00.809067 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5"] Feb 01 07:45:01 crc kubenswrapper[4835]: I0201 07:45:01.467856 4835 generic.go:334] "Generic (PLEG): container finished" podID="84654a3b-8db7-4ec6-950c-14bec7a98590" containerID="009401fd8cd37662006bfdceb1b612e942c8e587c0addc5645fd3075fa133198" exitCode=0 Feb 01 07:45:01 crc kubenswrapper[4835]: I0201 07:45:01.467952 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" event={"ID":"84654a3b-8db7-4ec6-950c-14bec7a98590","Type":"ContainerDied","Data":"009401fd8cd37662006bfdceb1b612e942c8e587c0addc5645fd3075fa133198"} Feb 01 07:45:01 crc kubenswrapper[4835]: I0201 07:45:01.468004 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" event={"ID":"84654a3b-8db7-4ec6-950c-14bec7a98590","Type":"ContainerStarted","Data":"8c95f7a05240cb9fddeb8f2d8bd71af84e0448e408020399d48d06806e40ec67"} Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.775513 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.874908 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") pod \"84654a3b-8db7-4ec6-950c-14bec7a98590\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.875134 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") pod \"84654a3b-8db7-4ec6-950c-14bec7a98590\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.875181 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") pod \"84654a3b-8db7-4ec6-950c-14bec7a98590\" (UID: \"84654a3b-8db7-4ec6-950c-14bec7a98590\") " Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.875897 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume" (OuterVolumeSpecName: "config-volume") pod "84654a3b-8db7-4ec6-950c-14bec7a98590" (UID: "84654a3b-8db7-4ec6-950c-14bec7a98590"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.881133 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs" (OuterVolumeSpecName: "kube-api-access-b92zs") pod "84654a3b-8db7-4ec6-950c-14bec7a98590" (UID: "84654a3b-8db7-4ec6-950c-14bec7a98590"). InnerVolumeSpecName "kube-api-access-b92zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.883552 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "84654a3b-8db7-4ec6-950c-14bec7a98590" (UID: "84654a3b-8db7-4ec6-950c-14bec7a98590"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.977247 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/84654a3b-8db7-4ec6-950c-14bec7a98590-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.977307 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84654a3b-8db7-4ec6-950c-14bec7a98590-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 07:45:02 crc kubenswrapper[4835]: I0201 07:45:02.977330 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b92zs\" (UniqueName: \"kubernetes.io/projected/84654a3b-8db7-4ec6-950c-14bec7a98590-kube-api-access-b92zs\") on node \"crc\" DevicePath \"\"" Feb 01 07:45:03 crc kubenswrapper[4835]: I0201 07:45:03.483150 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" event={"ID":"84654a3b-8db7-4ec6-950c-14bec7a98590","Type":"ContainerDied","Data":"8c95f7a05240cb9fddeb8f2d8bd71af84e0448e408020399d48d06806e40ec67"} Feb 01 07:45:03 crc kubenswrapper[4835]: I0201 07:45:03.483196 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498865-vm9z5" Feb 01 07:45:03 crc kubenswrapper[4835]: I0201 07:45:03.483209 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c95f7a05240cb9fddeb8f2d8bd71af84e0448e408020399d48d06806e40ec67" Feb 01 07:45:12 crc kubenswrapper[4835]: I0201 07:45:12.566580 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:45:12 crc kubenswrapper[4835]: I0201 07:45:12.567655 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:12 crc kubenswrapper[4835]: E0201 07:45:12.568265 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:45:14 crc kubenswrapper[4835]: I0201 07:45:14.567985 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:45:14 crc kubenswrapper[4835]: I0201 07:45:14.568548 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:45:14 crc kubenswrapper[4835]: I0201 07:45:14.568738 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:45:14 crc kubenswrapper[4835]: E0201 07:45:14.569241 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", 
failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:45:17 crc kubenswrapper[4835]: I0201 07:45:17.086769 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:45:17 crc kubenswrapper[4835]: I0201 07:45:17.110455 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gmb7x"] Feb 01 07:45:17 crc kubenswrapper[4835]: I0201 07:45:17.582217 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a95fd7f-8f31-420b-a847-e13f61aa0ce9" path="/var/lib/kubelet/pods/5a95fd7f-8f31-420b-a847-e13f61aa0ce9/volumes" Feb 01 07:45:23 crc kubenswrapper[4835]: I0201 07:45:23.567545 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809" Feb 01 07:45:23 crc kubenswrapper[4835]: I0201 07:45:23.568391 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:23 crc kubenswrapper[4835]: E0201 07:45:23.785903 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:45:24 crc kubenswrapper[4835]: I0201 07:45:24.675954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"} Feb 01 07:45:24 crc kubenswrapper[4835]: I0201 07:45:24.676922 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:24 crc kubenswrapper[4835]: E0201 07:45:24.677288 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:45:24 crc kubenswrapper[4835]: I0201 07:45:24.677379 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:45:25 crc kubenswrapper[4835]: I0201 07:45:25.686448 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:45:25 crc kubenswrapper[4835]: E0201 07:45:25.686663 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:45:27 crc kubenswrapper[4835]: I0201 07:45:27.574957 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:45:27 crc kubenswrapper[4835]: I0201 07:45:27.575504 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:45:27 crc kubenswrapper[4835]: I0201 07:45:27.575702 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:45:27 crc kubenswrapper[4835]: E0201 07:45:27.576227 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:45:30 crc kubenswrapper[4835]: I0201 07:45:30.540090 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:45:32 crc kubenswrapper[4835]: I0201 07:45:32.537748 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:45:33 crc kubenswrapper[4835]: I0201 07:45:33.538157 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.537674 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.538097 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.538990 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.539022 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.539073 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" gracePeriod=30
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.541828 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:45:36 crc kubenswrapper[4835]: E0201 07:45:36.672273 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.788110 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" exitCode=0
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.788167 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"}
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.788248 4835 scope.go:117] "RemoveContainer" containerID="e12087426e89cee543b8bc60565e1133d597d4b3f677d7e08ccc0d24138d3809"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.789107 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:45:36 crc kubenswrapper[4835]: I0201 07:45:36.789185 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:45:36 crc kubenswrapper[4835]: E0201 07:45:36.789797 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:45:41 crc kubenswrapper[4835]: I0201 07:45:41.566735 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:45:41 crc kubenswrapper[4835]: I0201 07:45:41.567929 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:45:41 crc kubenswrapper[4835]: I0201 07:45:41.568114 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:45:41 crc kubenswrapper[4835]: E0201 07:45:41.568455 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:45:50 crc kubenswrapper[4835]: I0201 07:45:50.567228 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:45:50 crc kubenswrapper[4835]: I0201 07:45:50.567955 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:45:50 crc kubenswrapper[4835]: E0201 07:45:50.568357 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:45:56 crc kubenswrapper[4835]: I0201 07:45:56.567378 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:45:56 crc kubenswrapper[4835]: I0201 07:45:56.567848 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:45:56 crc kubenswrapper[4835]: I0201 07:45:56.567993 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:45:56 crc kubenswrapper[4835]: E0201 07:45:56.568316 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:46:00 crc kubenswrapper[4835]: E0201 07:46:00.967613 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1edd7394_0f8e_4271_8774_f228946e62f3.slice/crio-675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4.scope\": RecentStats: unable to find data in memory cache]"
Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.031399 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4" exitCode=1
Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.031464 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4"}
Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.031501 4835 scope.go:117] "RemoveContainer" containerID="fa5aff8be1093aa2c10f2b4af85287d1729e836661be58a64baa1c833802045c"
Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.032158 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.032214 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.032236 4835 scope.go:117] "RemoveContainer" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4"
Feb 01 07:46:01 crc kubenswrapper[4835]: I0201 07:46:01.032315 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:01 crc kubenswrapper[4835]: E0201 07:46:01.032662 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:46:05 crc kubenswrapper[4835]: I0201 07:46:05.568081 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:46:05 crc kubenswrapper[4835]: I0201 07:46:05.568620 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:46:05 crc kubenswrapper[4835]: E0201 07:46:05.568830 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:46:08 crc kubenswrapper[4835]: I0201 07:46:08.199447 4835 scope.go:117] "RemoveContainer" containerID="4150461df03e979f73af252c924d3235e5873da5e6ee9fff2b41bd3c4a7515a0"
Feb 01 07:46:12 crc kubenswrapper[4835]: I0201 07:46:12.568379 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:12 crc kubenswrapper[4835]: I0201 07:46:12.569122 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:12 crc kubenswrapper[4835]: I0201 07:46:12.569169 4835 scope.go:117] "RemoveContainer" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4"
Feb 01 07:46:12 crc kubenswrapper[4835]: I0201 07:46:12.569293 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:12 crc kubenswrapper[4835]: E0201 07:46:12.569918 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:46:19 crc kubenswrapper[4835]: I0201 07:46:19.566746 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:46:19 crc kubenswrapper[4835]: I0201 07:46:19.567577 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:46:19 crc kubenswrapper[4835]: E0201 07:46:19.568193 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.191745 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.192434 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.567518 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.567653 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.567714 4835 scope.go:117] "RemoveContainer" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4"
Feb 01 07:46:25 crc kubenswrapper[4835]: I0201 07:46:25.567884 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:25 crc kubenswrapper[4835]: E0201 07:46:25.779793 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:46:26 crc kubenswrapper[4835]: I0201 07:46:26.329860 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6"}
Feb 01 07:46:26 crc kubenswrapper[4835]: I0201 07:46:26.331236 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:26 crc kubenswrapper[4835]: I0201 07:46:26.331363 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:26 crc kubenswrapper[4835]: I0201 07:46:26.331672 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:26 crc kubenswrapper[4835]: E0201 07:46:26.332243 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.049844 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"]
Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.060703 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"]
Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.071792 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-create-m9js9"]
Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.082791 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-d22d-account-create-update-clkrg"]
Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.581250 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766b4c0a-da92-4fe7-bf95-4a39f3fafafe" path="/var/lib/kubelet/pods/766b4c0a-da92-4fe7-bf95-4a39f3fafafe/volumes"
Feb 01 07:46:29 crc kubenswrapper[4835]: I0201 07:46:29.582059 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f574f591-2220-4cd1-88f7-ac79ac332aae" path="/var/lib/kubelet/pods/f574f591-2220-4cd1-88f7-ac79ac332aae/volumes"
Feb 01 07:46:30 crc kubenswrapper[4835]: I0201 07:46:30.567855 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:46:30 crc kubenswrapper[4835]: I0201 07:46:30.567899 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:46:30 crc kubenswrapper[4835]: E0201 07:46:30.568300 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:46:38 crc kubenswrapper[4835]: I0201 07:46:38.567277 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:38 crc kubenswrapper[4835]: I0201 07:46:38.567955 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:38 crc kubenswrapper[4835]: I0201 07:46:38.568071 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:38 crc kubenswrapper[4835]: E0201 07:46:38.568386 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:46:40 crc kubenswrapper[4835]: E0201 07:46:40.259782 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc"
Feb 01 07:46:40 crc kubenswrapper[4835]: I0201 07:46:40.462919 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:46:41 crc kubenswrapper[4835]: I0201 07:46:41.055750 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 07:46:41 crc kubenswrapper[4835]: E0201 07:46:41.055917 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 07:46:41 crc kubenswrapper[4835]: E0201 07:46:41.055974 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:48:43.05595433 +0000 UTC m=+1596.176390774 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 07:46:41 crc kubenswrapper[4835]: I0201 07:46:41.566970 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:46:41 crc kubenswrapper[4835]: I0201 07:46:41.567021 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:46:41 crc kubenswrapper[4835]: E0201 07:46:41.567588 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:46:49 crc kubenswrapper[4835]: I0201 07:46:49.046009 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"]
Feb 01 07:46:49 crc kubenswrapper[4835]: I0201 07:46:49.058358 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-5w5sr"]
Feb 01 07:46:49 crc kubenswrapper[4835]: I0201 07:46:49.579198 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1d09a3-13ff-43c0-835a-de9a6f9b5103" path="/var/lib/kubelet/pods/cd1d09a3-13ff-43c0-835a-de9a6f9b5103/volumes"
Feb 01 07:46:51 crc kubenswrapper[4835]: I0201 07:46:51.567361 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:51 crc kubenswrapper[4835]: I0201 07:46:51.567898 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:51 crc kubenswrapper[4835]: I0201 07:46:51.568095 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:51 crc kubenswrapper[4835]: E0201 07:46:51.568623 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.038016 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"]
Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.048222 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-6pjmn"]
Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.191940 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.192025 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:46:55 crc kubenswrapper[4835]: I0201 07:46:55.578981 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf026661-c9af-420a-8984-f7fbe212e592" path="/var/lib/kubelet/pods/bf026661-c9af-420a-8984-f7fbe212e592/volumes"
Feb 01 07:46:56 crc kubenswrapper[4835]: I0201 07:46:56.566739 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:46:56 crc kubenswrapper[4835]: I0201 07:46:56.566766 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:46:56 crc kubenswrapper[4835]: E0201 07:46:56.567073 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.633361 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" exitCode=1
Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.633426 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7"}
Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.633876 4835 scope.go:117] "RemoveContainer" containerID="419ab68c1eadc99bff71a26d28334bac6306a91472d2659f54afabe19795872b"
Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.635010 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.635163 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.635317 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7"
Feb 01 07:46:58 crc kubenswrapper[4835]: I0201 07:46:58.635361 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:46:58 crc kubenswrapper[4835]: E0201 07:46:58.635962 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:47:07 crc kubenswrapper[4835]: I0201 07:47:07.574549 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:47:07 crc kubenswrapper[4835]: I0201 07:47:07.575160 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:47:07 crc kubenswrapper[4835]: E0201 07:47:07.575642 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:47:08 crc kubenswrapper[4835]: I0201 07:47:08.266257 4835 scope.go:117] "RemoveContainer" containerID="eabeabeae4f73ee57a400f521880f710c03aa93decaac629af5189bf021874a3"
Feb 01 07:47:08 crc kubenswrapper[4835]: I0201 07:47:08.328837 4835 scope.go:117] "RemoveContainer" containerID="a06f9b42349fa2ea28d87918e953134cff78d85714b4da730fc4895d65231d70"
Feb 01 07:47:08 crc kubenswrapper[4835]: I0201 07:47:08.376951 4835 scope.go:117] "RemoveContainer" containerID="fe725302a8ffa5be3e180ac6b253d15da455fbca578acdea4628b374a3cde003"
Feb 01 07:47:08 crc kubenswrapper[4835]: I0201 07:47:08.402075 4835 scope.go:117] "RemoveContainer" containerID="215269eb271992c8cbc8e79c691e2434a7dce5223c9258cc1ad2fca20f897f92"
Feb 01 07:47:12 crc kubenswrapper[4835]: I0201 07:47:12.567847 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:47:12 crc kubenswrapper[4835]: I0201 07:47:12.568318 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:47:12 crc kubenswrapper[4835]: I0201 07:47:12.568502 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7"
Feb 01 07:47:12 crc kubenswrapper[4835]: I0201 07:47:12.568518 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:47:12 crc kubenswrapper[4835]: E0201 07:47:12.569141 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:47:22 crc kubenswrapper[4835]: I0201 07:47:22.567003 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:47:22 crc kubenswrapper[4835]: I0201 07:47:22.567366 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:47:22 crc kubenswrapper[4835]: E0201 07:47:22.567776 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:47:24 crc kubenswrapper[4835]: I0201 07:47:24.568080 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:47:24 crc kubenswrapper[4835]: I0201 07:47:24.568660 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:47:24 crc kubenswrapper[4835]: I0201 07:47:24.568902 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7"
Feb 01 07:47:24 crc kubenswrapper[4835]: I0201 07:47:24.568927 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:47:24 crc kubenswrapper[4835]: E0201 07:47:24.569651 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.191611 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.191733 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.191813 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78"
Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.192864 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.192978 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" gracePeriod=600
Feb 01 07:47:25 crc kubenswrapper[4835]: E0201 07:47:25.332545 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.900539 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" exitCode=0
Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.900609 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"}
Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.900682 4835 scope.go:117] "RemoveContainer" containerID="a43725792d229350ec7471be026c4c547e893839692a410ac3e424adc0af5ced"
Feb 01 07:47:25 crc kubenswrapper[4835]: I0201 07:47:25.901459 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"
Feb 01 07:47:25 crc kubenswrapper[4835]: E0201 07:47:25.901876 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.059550 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"]
Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.067582 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"]
Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.077071 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-2ff5-account-create-update-9hbgv"]
Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.083500 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-create-ddqhc"]
Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.584001 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26692abf-b5f8-4461-992d-508cb9b73bb2" path="/var/lib/kubelet/pods/26692abf-b5f8-4461-992d-508cb9b73bb2/volumes"
Feb 01 07:47:29 crc kubenswrapper[4835]: I0201 07:47:29.585384 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545f3a5d-c02e-45f2-aba5-ea50bf4fccd0" path="/var/lib/kubelet/pods/545f3a5d-c02e-45f2-aba5-ea50bf4fccd0/volumes"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.082986 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"]
Feb 01 07:47:30 crc kubenswrapper[4835]: E0201 07:47:30.083472 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84654a3b-8db7-4ec6-950c-14bec7a98590" containerName="collect-profiles"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.083494 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="84654a3b-8db7-4ec6-950c-14bec7a98590" containerName="collect-profiles"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.083777 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="84654a3b-8db7-4ec6-950c-14bec7a98590" containerName="collect-profiles"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.085791 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.098649 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"]
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.255314 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.255556 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.255945 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.357327 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.357440 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.357484 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.357944 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.358185 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.385947 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") pod \"redhat-marketplace-vxqbf\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") " pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.427306 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.881983 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"]
Feb 01 07:47:30 crc kubenswrapper[4835]: W0201 07:47:30.886124 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d464c1_793a_4b74_af45_55a092004f64.slice/crio-41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52 WatchSource:0}: Error finding container 41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52: Status 404 returned error can't find the container with id 41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52
Feb 01 07:47:30 crc kubenswrapper[4835]: I0201 07:47:30.952294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerStarted","Data":"41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52"}
Feb 01 07:47:31 crc kubenswrapper[4835]: I0201 07:47:31.963523 4835 generic.go:334] "Generic (PLEG): container finished" podID="21d464c1-793a-4b74-af45-55a092004f64" containerID="d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6" exitCode=0
Feb 01 07:47:31 crc kubenswrapper[4835]: I0201 07:47:31.963585 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerDied","Data":"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6"}
Feb 01 07:47:31 crc kubenswrapper[4835]: I0201 07:47:31.966868 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 01 07:47:32 crc kubenswrapper[4835]: I0201 07:47:32.975666 4835 generic.go:334] "Generic (PLEG): container finished" podID="21d464c1-793a-4b74-af45-55a092004f64" containerID="274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de" exitCode=0
Feb 01 07:47:32 crc kubenswrapper[4835]: I0201 07:47:32.975813 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerDied","Data":"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de"}
Feb 01 07:47:33 crc kubenswrapper[4835]: I0201 07:47:33.989722 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerStarted","Data":"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a"}
Feb 01 07:47:34 crc kubenswrapper[4835]: I0201 07:47:34.566650 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:47:34 crc kubenswrapper[4835]: I0201 07:47:34.566698 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:47:34 crc kubenswrapper[4835]: E0201 07:47:34.567086 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:47:36 crc kubenswrapper[4835]: I0201 07:47:36.567853 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"
Feb 01 07:47:36 crc kubenswrapper[4835]: I0201 07:47:36.568270 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"
Feb 01 07:47:36 crc kubenswrapper[4835]: I0201 07:47:36.568386 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7"
Feb 01 07:47:36 crc kubenswrapper[4835]: I0201 07:47:36.568398 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"
Feb 01 07:47:36 crc kubenswrapper[4835]: E0201 07:47:36.568772 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3"
Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.428552 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.428978 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.539777 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.565610 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vxqbf" podStartSLOduration=9.142474894 podStartE2EDuration="10.565592744s" podCreationTimestamp="2026-02-01 07:47:30 +0000 UTC" firstStartedPulling="2026-02-01 07:47:31.966456788 +0000 UTC m=+1525.086893262" lastFinishedPulling="2026-02-01 07:47:33.389574648 +0000 UTC m=+1526.510011112" observedRunningTime="2026-02-01 07:47:34.018608569 +0000 UTC m=+1527.139045033" watchObservedRunningTime="2026-02-01 07:47:40.565592744 +0000 UTC m=+1533.686029188"
Feb 01 07:47:40 crc kubenswrapper[4835]: I0201 07:47:40.567655 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"
Feb 01 07:47:40 crc kubenswrapper[4835]: E0201 07:47:40.568080 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:47:41 crc kubenswrapper[4835]: I0201 07:47:41.114910 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:41 crc kubenswrapper[4835]: I0201 07:47:41.183382 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"]
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.079631 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vxqbf" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="registry-server" containerID="cri-o://0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a" gracePeriod=2
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.533386 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.696452 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") pod \"21d464c1-793a-4b74-af45-55a092004f64\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") "
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.696572 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") pod \"21d464c1-793a-4b74-af45-55a092004f64\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") "
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.696681 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") pod \"21d464c1-793a-4b74-af45-55a092004f64\" (UID: \"21d464c1-793a-4b74-af45-55a092004f64\") "
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.697758 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities" (OuterVolumeSpecName: "utilities") pod "21d464c1-793a-4b74-af45-55a092004f64" (UID: "21d464c1-793a-4b74-af45-55a092004f64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.699090 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-utilities\") on node \"crc\" DevicePath \"\""
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.703567 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs" (OuterVolumeSpecName: "kube-api-access-xk8vs") pod "21d464c1-793a-4b74-af45-55a092004f64" (UID: "21d464c1-793a-4b74-af45-55a092004f64"). InnerVolumeSpecName "kube-api-access-xk8vs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.724596 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21d464c1-793a-4b74-af45-55a092004f64" (UID: "21d464c1-793a-4b74-af45-55a092004f64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.801081 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk8vs\" (UniqueName: \"kubernetes.io/projected/21d464c1-793a-4b74-af45-55a092004f64-kube-api-access-xk8vs\") on node \"crc\" DevicePath \"\""
Feb 01 07:47:43 crc kubenswrapper[4835]: I0201 07:47:43.801117 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21d464c1-793a-4b74-af45-55a092004f64-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097397 4835 generic.go:334] "Generic (PLEG): container finished" podID="21d464c1-793a-4b74-af45-55a092004f64" containerID="0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a" exitCode=0
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097507 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerDied","Data":"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a"}
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097529 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxqbf"
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxqbf" event={"ID":"21d464c1-793a-4b74-af45-55a092004f64","Type":"ContainerDied","Data":"41ddcd108b26975c17744312e6a174235fa49a681da173a14f0314dcfb971b52"}
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.097632 4835 scope.go:117] "RemoveContainer" containerID="0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a"
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.144308 4835 scope.go:117] "RemoveContainer" containerID="274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de"
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.161534 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"]
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.176532 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxqbf"]
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.183763 4835 scope.go:117] "RemoveContainer" containerID="d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6"
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.246125 4835 scope.go:117] "RemoveContainer" containerID="0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a"
Feb 01 07:47:44 crc kubenswrapper[4835]: E0201 07:47:44.246759 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a\": container with ID starting with 0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a not found: ID does not exist" containerID="0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a"
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.246872 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a"} err="failed to get container status \"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a\": rpc error: code = NotFound desc = could not find container \"0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a\": container with ID starting with 0b214a7dbc77af93f7227c887929c74bf835d3970806e935dc401fc10d1d5d5a not found: ID does not exist"
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.246926 4835 scope.go:117] "RemoveContainer" containerID="274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de"
Feb 01 07:47:44 crc kubenswrapper[4835]: E0201 07:47:44.247265 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de\": container with ID starting with 274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de not found: ID does not exist" containerID="274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de"
Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.247324 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de"} err="failed to get container status \"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de\": rpc error: code = NotFound desc = could not find
container \"274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de\": container with ID starting with 274e101ce145aaf6c60956225d93a97345ee5026d78831612544982935b751de not found: ID does not exist" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.247357 4835 scope.go:117] "RemoveContainer" containerID="d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6" Feb 01 07:47:44 crc kubenswrapper[4835]: E0201 07:47:44.247683 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6\": container with ID starting with d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6 not found: ID does not exist" containerID="d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6" Feb 01 07:47:44 crc kubenswrapper[4835]: I0201 07:47:44.247728 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6"} err="failed to get container status \"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6\": rpc error: code = NotFound desc = could not find container \"d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6\": container with ID starting with d0b33215ad0e62d60917914f0f63caf2463ef81b6e6da2a1b081d40c7f29f7a6 not found: ID does not exist" Feb 01 07:47:45 crc kubenswrapper[4835]: I0201 07:47:45.600188 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d464c1-793a-4b74-af45-55a092004f64" path="/var/lib/kubelet/pods/21d464c1-793a-4b74-af45-55a092004f64/volumes" Feb 01 07:47:47 crc kubenswrapper[4835]: I0201 07:47:47.575184 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:47:47 crc kubenswrapper[4835]: I0201 07:47:47.575675 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:47:47 crc kubenswrapper[4835]: E0201 07:47:47.576064 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:47:48 crc kubenswrapper[4835]: I0201 07:47:48.567633 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:47:48 crc kubenswrapper[4835]: I0201 07:47:48.567778 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:47:48 crc kubenswrapper[4835]: I0201 07:47:48.568142 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" Feb 01 07:47:48 crc kubenswrapper[4835]: I0201 07:47:48.568181 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:47:48 crc kubenswrapper[4835]: E0201 07:47:48.809082 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:47:49 crc kubenswrapper[4835]: I0201 07:47:49.157307 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerStarted","Data":"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea"} Feb 01 07:47:49 crc kubenswrapper[4835]: I0201 07:47:49.158387 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:47:49 crc kubenswrapper[4835]: I0201 07:47:49.158592 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:47:49 crc kubenswrapper[4835]: I0201 07:47:49.158826 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:47:49 crc kubenswrapper[4835]: E0201 07:47:49.159629 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.508900 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:47:50 crc kubenswrapper[4835]: E0201 07:47:50.509362 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="extract-utilities" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.509380 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="extract-utilities" Feb 01 07:47:50 crc kubenswrapper[4835]: E0201 07:47:50.509457 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="extract-content" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.509469 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="extract-content" Feb 01 07:47:50 crc kubenswrapper[4835]: E0201 07:47:50.509484 4835 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="registry-server" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.509492 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="registry-server" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.509690 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d464c1-793a-4b74-af45-55a092004f64" containerName="registry-server" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.512757 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.521857 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.626791 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.626883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.627012 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.728285 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.728402 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.728444 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.728908 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") pod 
\"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.729150 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.760446 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") pod \"community-operators-94kkf\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:50 crc kubenswrapper[4835]: I0201 07:47:50.882134 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:47:51 crc kubenswrapper[4835]: W0201 07:47:51.336602 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb01afbf3_db38_46a9_a5f4_bb290653ec52.slice/crio-c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8 WatchSource:0}: Error finding container c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8: Status 404 returned error can't find the container with id c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8 Feb 01 07:47:51 crc kubenswrapper[4835]: I0201 07:47:51.337450 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:47:52 crc kubenswrapper[4835]: I0201 07:47:52.192275 4835 generic.go:334] "Generic (PLEG): container finished" podID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerID="33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f" exitCode=0 Feb 01 07:47:52 crc kubenswrapper[4835]: I0201 07:47:52.192362 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerDied","Data":"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f"} Feb 01 07:47:52 crc kubenswrapper[4835]: I0201 07:47:52.192610 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerStarted","Data":"c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8"} Feb 01 07:47:53 crc kubenswrapper[4835]: I0201 07:47:53.203882 4835 generic.go:334] "Generic (PLEG): container finished" podID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerID="0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4" exitCode=0 Feb 01 07:47:53 crc kubenswrapper[4835]: I0201 07:47:53.204126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerDied","Data":"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4"} Feb 01 07:47:54 crc kubenswrapper[4835]: I0201 07:47:54.214533 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" 
event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerStarted","Data":"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30"} Feb 01 07:47:54 crc kubenswrapper[4835]: I0201 07:47:54.237498 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94kkf" podStartSLOduration=2.822507185 podStartE2EDuration="4.237471021s" podCreationTimestamp="2026-02-01 07:47:50 +0000 UTC" firstStartedPulling="2026-02-01 07:47:52.194861772 +0000 UTC m=+1545.315298236" lastFinishedPulling="2026-02-01 07:47:53.609825598 +0000 UTC m=+1546.730262072" observedRunningTime="2026-02-01 07:47:54.236172697 +0000 UTC m=+1547.356609151" watchObservedRunningTime="2026-02-01 07:47:54.237471021 +0000 UTC m=+1547.357907485" Feb 01 07:47:54 crc kubenswrapper[4835]: I0201 07:47:54.566868 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:47:54 crc kubenswrapper[4835]: E0201 07:47:54.567340 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:47:58 crc kubenswrapper[4835]: I0201 07:47:58.567060 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:47:58 crc kubenswrapper[4835]: I0201 07:47:58.567492 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:47:58 crc kubenswrapper[4835]: E0201 07:47:58.568005 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:00 crc kubenswrapper[4835]: I0201 07:48:00.882772 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:00 crc kubenswrapper[4835]: I0201 07:48:00.884253 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:00 crc kubenswrapper[4835]: I0201 07:48:00.963437 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:01 crc kubenswrapper[4835]: I0201 07:48:01.352590 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:01 crc kubenswrapper[4835]: I0201 07:48:01.424804 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.296445 4835 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/community-operators-94kkf" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="registry-server" containerID="cri-o://94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" gracePeriod=2 Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.566938 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.567313 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.567462 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:48:03 crc kubenswrapper[4835]: E0201 07:48:03.567875 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.771603 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.842103 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") pod \"b01afbf3-db38-46a9-a5f4-bb290653ec52\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.842214 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") pod \"b01afbf3-db38-46a9-a5f4-bb290653ec52\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.842465 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") pod \"b01afbf3-db38-46a9-a5f4-bb290653ec52\" (UID: \"b01afbf3-db38-46a9-a5f4-bb290653ec52\") " Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.844145 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities" (OuterVolumeSpecName: "utilities") pod "b01afbf3-db38-46a9-a5f4-bb290653ec52" (UID: "b01afbf3-db38-46a9-a5f4-bb290653ec52"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.853962 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b" (OuterVolumeSpecName: "kube-api-access-hgd9b") pod "b01afbf3-db38-46a9-a5f4-bb290653ec52" (UID: "b01afbf3-db38-46a9-a5f4-bb290653ec52"). InnerVolumeSpecName "kube-api-access-hgd9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.944548 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgd9b\" (UniqueName: \"kubernetes.io/projected/b01afbf3-db38-46a9-a5f4-bb290653ec52-kube-api-access-hgd9b\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:03 crc kubenswrapper[4835]: I0201 07:48:03.944588 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.180962 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b01afbf3-db38-46a9-a5f4-bb290653ec52" (UID: "b01afbf3-db38-46a9-a5f4-bb290653ec52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.248988 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b01afbf3-db38-46a9-a5f4-bb290653ec52-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312806 4835 generic.go:334] "Generic (PLEG): container finished" podID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerID="94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" exitCode=0 Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312877 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerDied","Data":"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30"} Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94kkf" event={"ID":"b01afbf3-db38-46a9-a5f4-bb290653ec52","Type":"ContainerDied","Data":"c23bed94c7caa98a2b02390bf50f9b29f4693ad660c026f33aa2ba62a7995cb8"} Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312964 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94kkf" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.312982 4835 scope.go:117] "RemoveContainer" containerID="94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.352376 4835 scope.go:117] "RemoveContainer" containerID="0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.386277 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.393888 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-94kkf"] Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.408167 4835 scope.go:117] "RemoveContainer" containerID="33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.430669 4835 scope.go:117] "RemoveContainer" containerID="94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" Feb 01 07:48:04 crc kubenswrapper[4835]: E0201 07:48:04.432285 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30\": container with ID starting with 94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30 not found: ID does not exist" containerID="94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.432373 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30"} err="failed to get container status \"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30\": rpc error: code = NotFound desc = could not find container \"94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30\": container with ID starting with 94cebea7ea948d0533aefe2045df48d5c9470af81441244e7a5b2e426243ea30 not found: ID does not exist" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.432449 4835 scope.go:117] "RemoveContainer" containerID="0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4" Feb 01 07:48:04 crc kubenswrapper[4835]: E0201 07:48:04.433027 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4\": container with ID starting with 0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4 not found: ID does not exist" containerID="0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.433058 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4"} err="failed to get container status \"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4\": rpc error: code = NotFound desc = could not find container \"0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4\": container with ID starting with 0c4ae2249c85a7d04192d7222c6c481da472da58a1e3d5b3355c7af3f0d90fb4 not found: ID does not exist" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.433078 4835 scope.go:117] "RemoveContainer" 
containerID="33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f" Feb 01 07:48:04 crc kubenswrapper[4835]: E0201 07:48:04.433646 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f\": container with ID starting with 33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f not found: ID does not exist" containerID="33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f" Feb 01 07:48:04 crc kubenswrapper[4835]: I0201 07:48:04.433670 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f"} err="failed to get container status \"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f\": rpc error: code = NotFound desc = could not find container \"33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f\": container with ID starting with 33daf99d5db7449ff236f81b8af71e4b596e0372ee9b12dc6439d5ccd594150f not found: ID does not exist" Feb 01 07:48:05 crc kubenswrapper[4835]: I0201 07:48:05.567951 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:05 crc kubenswrapper[4835]: E0201 07:48:05.568370 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:05 crc kubenswrapper[4835]: I0201 07:48:05.583841 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" path="/var/lib/kubelet/pods/b01afbf3-db38-46a9-a5f4-bb290653ec52/volumes" Feb 01 07:48:08 crc kubenswrapper[4835]: I0201 07:48:08.511802 4835 scope.go:117] "RemoveContainer" containerID="65cf85b1dd72d5635988e485f041129154e6406263a9f9918622bbd9bb651c81" Feb 01 07:48:08 crc kubenswrapper[4835]: I0201 07:48:08.536122 4835 scope.go:117] "RemoveContainer" containerID="212958e93fcbd8f3fdf3afad7d233490e91ef9f2cf2380e3ac58f8cc1722a0b6" Feb 01 07:48:13 crc kubenswrapper[4835]: I0201 07:48:13.567319 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:48:13 crc kubenswrapper[4835]: I0201 07:48:13.567361 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:48:13 crc kubenswrapper[4835]: E0201 07:48:13.567697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:14 crc kubenswrapper[4835]: I0201 07:48:14.568130 4835 scope.go:117] "RemoveContainer" 
containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:48:14 crc kubenswrapper[4835]: I0201 07:48:14.568263 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:48:14 crc kubenswrapper[4835]: I0201 07:48:14.568470 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:48:14 crc kubenswrapper[4835]: E0201 07:48:14.568941 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:48:20 crc kubenswrapper[4835]: I0201 07:48:20.566487 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:20 crc kubenswrapper[4835]: E0201 07:48:20.567513 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:25 crc kubenswrapper[4835]: I0201 07:48:25.567722 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:48:25 crc kubenswrapper[4835]: I0201 07:48:25.568303 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:48:25 crc kubenswrapper[4835]: I0201 07:48:25.568446 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:48:25 crc kubenswrapper[4835]: E0201 07:48:25.568878 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(1edd7394-0f8e-4271-8774-f228946e62f3)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" Feb 01 07:48:27 crc kubenswrapper[4835]: I0201 07:48:27.576956 4835 scope.go:117] "RemoveContainer" 
containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:48:27 crc kubenswrapper[4835]: I0201 07:48:27.577351 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:48:27 crc kubenswrapper[4835]: E0201 07:48:27.577779 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.259877 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260234 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-server" containerID="cri-o://abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260367 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-replicator" containerID="cri-o://57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260333 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-auditor" containerID="cri-o://115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260401 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" containerID="cri-o://24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260452 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-reaper" containerID="cri-o://c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260389 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="rsync" containerID="cri-o://1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260494 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-server" 
containerID="cri-o://e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260505 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-auditor" containerID="cri-o://c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260444 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-server" containerID="cri-o://eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260538 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-auditor" containerID="cri-o://3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260342 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="swift-recon-cron" containerID="cri-o://c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.260593 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" containerID="cri-o://2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" gracePeriod=30 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546023 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546376 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546386 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546393 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546183 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546518 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6"} Feb 01 07:48:29 
crc kubenswrapper[4835]: I0201 07:48:29.546399 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546563 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546590 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546589 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546617 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546618 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546631 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" exitCode=0 Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546644 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546669 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546693 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407"} Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.546597 4835 scope.go:117] "RemoveContainer" containerID="a099e806e124b688716a90012a83109f2769650600cbbb38008ff999723edbe7" Feb 01 07:48:29 crc kubenswrapper[4835]: I0201 07:48:29.607341 4835 scope.go:117] "RemoveContainer" containerID="675783f3860e44aa26dc702d2c9b79308d6ca04cb0bf0b461ea1c6f19635f2c4" Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566644 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" exitCode=0 Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566706 4835 generic.go:334] 
"Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" exitCode=0 Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566726 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" exitCode=0 Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566724 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876"} Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566787 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4"} Feb 01 07:48:30 crc kubenswrapper[4835]: I0201 07:48:30.566816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541"} Feb 01 07:48:35 crc kubenswrapper[4835]: I0201 07:48:35.567768 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:35 crc kubenswrapper[4835]: E0201 07:48:35.569002 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:42 crc kubenswrapper[4835]: I0201 07:48:42.566806 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:48:42 crc kubenswrapper[4835]: I0201 07:48:42.567293 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:48:42 crc kubenswrapper[4835]: E0201 07:48:42.567565 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:43 crc kubenswrapper[4835]: I0201 07:48:43.062434 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:48:43 crc kubenswrapper[4835]: E0201 07:48:43.062876 4835 configmap.go:193] Couldn't get configMap 
swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:48:43 crc kubenswrapper[4835]: E0201 07:48:43.062933 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:50:45.062912622 +0000 UTC m=+1718.183349056 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:48:43 crc kubenswrapper[4835]: E0201 07:48:43.464534 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:48:43 crc kubenswrapper[4835]: I0201 07:48:43.697634 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:48:46 crc kubenswrapper[4835]: I0201 07:48:46.567349 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:46 crc kubenswrapper[4835]: E0201 07:48:46.568267 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:53 crc kubenswrapper[4835]: I0201 07:48:53.566802 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:48:53 crc kubenswrapper[4835]: I0201 07:48:53.567303 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:48:53 crc kubenswrapper[4835]: E0201 07:48:53.567548 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.630527 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"] Feb 01 07:48:54 crc kubenswrapper[4835]: E0201 07:48:54.630844 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="extract-utilities" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.630859 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="extract-utilities" 
Feb 01 07:48:54 crc kubenswrapper[4835]: E0201 07:48:54.630888 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="registry-server" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.630898 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="registry-server" Feb 01 07:48:54 crc kubenswrapper[4835]: E0201 07:48:54.630926 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="extract-content" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.630935 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="extract-content" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.631115 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01afbf3-db38-46a9-a5f4-bb290653ec52" containerName="registry-server" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.632043 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.653152 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"] Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729190 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-run-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729246 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-log-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729280 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0449d2d9-ddcc-4eaa-84b1-9095448105f5-config-data\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729410 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntmxx\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-kube-api-access-ntmxx\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.729572 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-etc-swift\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830713 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-ntmxx\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-kube-api-access-ntmxx\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830820 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-etc-swift\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830876 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-run-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830903 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-log-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.830922 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0449d2d9-ddcc-4eaa-84b1-9095448105f5-config-data\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.831369 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-log-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.831594 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0449d2d9-ddcc-4eaa-84b1-9095448105f5-run-httpd\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.837960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0449d2d9-ddcc-4eaa-84b1-9095448105f5-config-data\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.846716 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-etc-swift\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:54 crc kubenswrapper[4835]: I0201 07:48:54.864553 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntmxx\" (UniqueName: 
\"kubernetes.io/projected/0449d2d9-ddcc-4eaa-84b1-9095448105f5-kube-api-access-ntmxx\") pod \"swift-proxy-6c7f677bc9-lq29p\" (UID: \"0449d2d9-ddcc-4eaa-84b1-9095448105f5\") " pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.018613 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.330150 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"] Feb 01 07:48:55 crc kubenswrapper[4835]: W0201 07:48:55.346570 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0449d2d9_ddcc_4eaa_84b1_9095448105f5.slice/crio-149a7db6d3e3e367fac7d873cfee0becf2c8c9c52da7d468a7fef2e1cec7a233 WatchSource:0}: Error finding container 149a7db6d3e3e367fac7d873cfee0becf2c8c9c52da7d468a7fef2e1cec7a233: Status 404 returned error can't find the container with id 149a7db6d3e3e367fac7d873cfee0becf2c8c9c52da7d468a7fef2e1cec7a233 Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.801658 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e"} Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.801757 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc"} Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.801778 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"149a7db6d3e3e367fac7d873cfee0becf2c8c9c52da7d468a7fef2e1cec7a233"} Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.801838 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:55 crc kubenswrapper[4835]: I0201 07:48:55.829944 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podStartSLOduration=1.829913235 podStartE2EDuration="1.829913235s" podCreationTimestamp="2026-02-01 07:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 07:48:55.828232151 +0000 UTC m=+1608.948668605" watchObservedRunningTime="2026-02-01 07:48:55.829913235 +0000 UTC m=+1608.950349669" Feb 01 07:48:56 crc kubenswrapper[4835]: I0201 07:48:56.817659 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e"} Feb 01 07:48:56 crc kubenswrapper[4835]: I0201 07:48:56.818067 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:56 crc kubenswrapper[4835]: I0201 07:48:56.818301 4835 scope.go:117] "RemoveContainer" containerID="1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e" Feb 
01 07:48:56 crc kubenswrapper[4835]: I0201 07:48:56.817478 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e" exitCode=1 Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.839113 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888"} Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.839826 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.849210 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.850615 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.856174 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.896514 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.896573 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.896620 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.997663 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.998035 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.998311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.998582 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:57 crc kubenswrapper[4835]: I0201 07:48:57.999379 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.040890 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") pod \"certified-operators-xgrp2\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.172678 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.658526 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.850489 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" exitCode=1 Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.850559 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888"} Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.850590 4835 scope.go:117] "RemoveContainer" containerID="1838599b5d9bc0829100b3d6f15b7c7c33c2ec97bcdc55704c4ebbde697b911e" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.851502 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:48:58 crc kubenswrapper[4835]: E0201 07:48:58.851994 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.854355 4835 generic.go:334] "Generic (PLEG): container finished" podID="952a92f0-8bd4-4aa9-b437-af019f748380" containerID="811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee" exitCode=0 Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.854416 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" 
event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerDied","Data":"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee"} Feb 01 07:48:58 crc kubenswrapper[4835]: I0201 07:48:58.854491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerStarted","Data":"531b752c0353bd0cf7d0d623b4ef2f05ab183ae8b42ef50855bcea2f7ac14cc4"} Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.567211 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:48:59 crc kubenswrapper[4835]: E0201 07:48:59.567980 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.722446 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.838845 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839030 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839065 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839098 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") pod \"1edd7394-0f8e-4271-8774-f228946e62f3\" (UID: \"1edd7394-0f8e-4271-8774-f228946e62f3\") " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.839827 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock" (OuterVolumeSpecName: "lock") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.840036 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache" (OuterVolumeSpecName: "cache") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.843537 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.843863 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9" (OuterVolumeSpecName: "kube-api-access-wt6t9") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "kube-api-access-wt6t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.843920 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1edd7394-0f8e-4271-8774-f228946e62f3" (UID: "1edd7394-0f8e-4271-8774-f228946e62f3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.870152 4835 generic.go:334] "Generic (PLEG): container finished" podID="1edd7394-0f8e-4271-8774-f228946e62f3" containerID="c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" exitCode=137 Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.870285 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.870308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b"} Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.874851 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1edd7394-0f8e-4271-8774-f228946e62f3","Type":"ContainerDied","Data":"965930581ebfe6a06bce16c42d9dbc0702e4b9210c5c9c9057f64d28fcd26803"} Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.874877 4835 scope.go:117] "RemoveContainer" containerID="24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.878880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerStarted","Data":"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997"} Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.890145 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:48:59 crc kubenswrapper[4835]: E0201 07:48:59.890617 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.895108 4835 scope.go:117] "RemoveContainer" containerID="2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941132 4835 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-lock\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941181 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt6t9\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-kube-api-access-wt6t9\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941199 4835 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1edd7394-0f8e-4271-8774-f228946e62f3-cache\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941244 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.941331 4835 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1edd7394-0f8e-4271-8774-f228946e62f3-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.942551 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.959132 4835 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.963551 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.964861 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.968841 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:48:59 crc kubenswrapper[4835]: I0201 07:48:59.986706 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013086 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013390 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013413 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013441 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013450 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013466 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013474 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013485 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013494 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013504 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013512 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013526 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="swift-recon-cron" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013534 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="swift-recon-cron" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013547 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" 
containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013557 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013569 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013579 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013588 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013597 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013608 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013616 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013629 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013637 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013646 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013654 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-server" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013666 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013674 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013686 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013694 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013707 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013715 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013727 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013735 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013747 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013757 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013767 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013775 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013788 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-reaper" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013797 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-reaper" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013807 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013814 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013828 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013836 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013847 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013855 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013865 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013873 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013883 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013892 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013908 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013917 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013928 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013936 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013950 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013958 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-server" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.013974 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.013999 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-server" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.014009 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014017 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.014027 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014035 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.014045 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="rsync" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014053 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="rsync" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.014067 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014075 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014223 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014250 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014260 4835 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014270 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014282 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014293 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014304 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014314 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014328 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014340 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="swift-recon-cron" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014349 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014359 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014369 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014379 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014390 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014401 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-server" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014418 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014951 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014965 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014978 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-reaper" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.014992 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015006 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015040 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015052 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015060 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015070 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015080 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015092 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015102 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-auditor" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015115 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015124 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015132 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015147 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015155 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="rsync" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015326 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015337 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015348 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 
07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015356 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015372 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015380 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015394 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015402 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015414 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015443 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015460 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015468 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.015483 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015490 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015828 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015849 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015859 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015879 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.015889 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="container-updater" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.016037 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.016046 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="account-replicator" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.016217 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edd7394-0f8e-4271-8774-f228946e62f3" containerName="object-expirer" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.021510 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.024062 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.025612 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.029007 4835 scope.go:117] "RemoveContainer" containerID="c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.032790 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.045080 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.064638 4835 scope.go:117] "RemoveContainer" containerID="1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.091480 4835 scope.go:117] "RemoveContainer" containerID="115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.110466 4835 scope.go:117] "RemoveContainer" containerID="57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.126199 4835 scope.go:117] "RemoveContainer" containerID="e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.143168 4835 scope.go:117] "RemoveContainer" containerID="3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146488 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt7d8\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-kube-api-access-tt7d8\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146534 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-etc-swift\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146560 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-cache\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " 
pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146596 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-lock\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.146713 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.161927 4835 scope.go:117] "RemoveContainer" containerID="eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.194408 4835 scope.go:117] "RemoveContainer" containerID="c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.215043 4835 scope.go:117] "RemoveContainer" containerID="c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.232612 4835 scope.go:117] "RemoveContainer" containerID="abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248304 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248528 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") device mount path \"/mnt/openstack/pv10\"" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248548 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt7d8\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-kube-api-access-tt7d8\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248591 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-etc-swift\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248625 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-cache\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.248683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-lock\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.249734 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-lock\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.249757 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-cache\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.251459 4835 scope.go:117] "RemoveContainer" containerID="24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.253418 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-etc-swift\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.256870 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea\": container with ID starting with 24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea not found: ID does not exist" containerID="24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.256933 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea"} err="failed to get container status \"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea\": rpc error: code = NotFound desc = could not find container \"24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea\": container with ID starting with 24c70f8e7a963f439f9a715dbf780d7f583dd8ae4f27ef3b92192f1f9ffc56ea not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.256970 4835 scope.go:117] "RemoveContainer" containerID="2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.257329 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6\": container with ID starting with 2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6 not found: ID does not exist" containerID="2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.257372 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6"} err="failed to get container status \"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6\": rpc error: code = NotFound desc = could not find container 
\"2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6\": container with ID starting with 2ccacf7054750fc124e6d667a5b3a4fca74d9159c050ae51185ce7c6b495bbe6 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.257400 4835 scope.go:117] "RemoveContainer" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.257699 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328\": container with ID starting with 8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328 not found: ID does not exist" containerID="8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.257726 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328"} err="failed to get container status \"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328\": rpc error: code = NotFound desc = could not find container \"8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328\": container with ID starting with 8c4cf3f95117443917fb19196d11e99401bdee77261b71fff91b1e3715b29328 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.257744 4835 scope.go:117] "RemoveContainer" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.257978 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2\": container with ID starting with 8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2 not found: ID does not exist" containerID="8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258007 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2"} err="failed to get container status \"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2\": rpc error: code = NotFound desc = could not find container \"8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2\": container with ID starting with 8363cd5ac27caae0c967f465d3ea98de522e6bd2b9748bfd438db020c4918fc2 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258025 4835 scope.go:117] "RemoveContainer" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.258256 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d\": container with ID starting with 258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d not found: ID does not exist" containerID="258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258302 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d"} 
err="failed to get container status \"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d\": rpc error: code = NotFound desc = could not find container \"258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d\": container with ID starting with 258cbae264fd7af86d488b1e1991bd6d29d7a59f6f1f3730a5482333f2b1614d not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258324 4835 scope.go:117] "RemoveContainer" containerID="c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.258570 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b\": container with ID starting with c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b not found: ID does not exist" containerID="c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258594 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b"} err="failed to get container status \"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b\": rpc error: code = NotFound desc = could not find container \"c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b\": container with ID starting with c2bb2c50979d81b48db3da8d1503421df516cf45c6cb8eddcab8d29e7b89e40b not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258611 4835 scope.go:117] "RemoveContainer" containerID="1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.258810 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876\": container with ID starting with 1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876 not found: ID does not exist" containerID="1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258836 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876"} err="failed to get container status \"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876\": rpc error: code = NotFound desc = could not find container \"1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876\": container with ID starting with 1244aa8579be5d9284ebc00671702c6922c1ee0c32324cc3fb026ab5c3634876 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.258853 4835 scope.go:117] "RemoveContainer" containerID="115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.259047 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14\": container with ID starting with 115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14 not found: ID does not exist" containerID="115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259072 4835 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14"} err="failed to get container status \"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14\": rpc error: code = NotFound desc = could not find container \"115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14\": container with ID starting with 115bbc64e704d41ae4244ee3df9b13e55015920e53f212f314acf31071b2bf14 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259089 4835 scope.go:117] "RemoveContainer" containerID="57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.259286 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c\": container with ID starting with 57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c not found: ID does not exist" containerID="57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259313 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c"} err="failed to get container status \"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c\": rpc error: code = NotFound desc = could not find container \"57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c\": container with ID starting with 57f650c2bf61220733002708c6de1b1f0b9bedf1608f819556e91bcbf73a479c not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259329 4835 scope.go:117] "RemoveContainer" containerID="e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.259592 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4\": container with ID starting with e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4 not found: ID does not exist" containerID="e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259641 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4"} err="failed to get container status \"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4\": rpc error: code = NotFound desc = could not find container \"e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4\": container with ID starting with e1ae71b74256ecedefc7fbf253c43d8171b47774a342cb3954c7d0625c83ceb4 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.259709 4835 scope.go:117] "RemoveContainer" containerID="3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.259975 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10\": container with ID starting with 3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10 not found: ID does 
not exist" containerID="3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260003 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10"} err="failed to get container status \"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10\": rpc error: code = NotFound desc = could not find container \"3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10\": container with ID starting with 3f92566bd67947d9babfc2464c78a74c7f787b215d8cc4f97cb5e94b3c298f10 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260020 4835 scope.go:117] "RemoveContainer" containerID="eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.260258 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a\": container with ID starting with eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a not found: ID does not exist" containerID="eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260285 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a"} err="failed to get container status \"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a\": rpc error: code = NotFound desc = could not find container \"eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a\": container with ID starting with eb8a3ffd071b9c2b3f1584e981522df172dcb88a198689e7934e8735ecf4b50a not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260306 4835 scope.go:117] "RemoveContainer" containerID="c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.260531 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a\": container with ID starting with c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a not found: ID does not exist" containerID="c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260556 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a"} err="failed to get container status \"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a\": rpc error: code = NotFound desc = could not find container \"c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a\": container with ID starting with c9e3d55dd0fa17eedf107eb2b3e5dac364ff8077e8a1d4e0d9016998e9e14b2a not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260574 4835 scope.go:117] "RemoveContainer" containerID="c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.260782 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407\": container with ID starting with c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407 not found: ID does not exist" containerID="c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260808 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407"} err="failed to get container status \"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407\": rpc error: code = NotFound desc = could not find container \"c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407\": container with ID starting with c677208601eec0c0fae2c620f112d3a005a89800a130f6a2742cfc65c7caf407 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.260826 4835 scope.go:117] "RemoveContainer" containerID="abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" Feb 01 07:49:00 crc kubenswrapper[4835]: E0201 07:49:00.261016 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541\": container with ID starting with abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541 not found: ID does not exist" containerID="abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.261040 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541"} err="failed to get container status \"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541\": rpc error: code = NotFound desc = could not find container \"abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541\": container with ID starting with abaae4399d0309909ee61f1119476fc6ca124d2a5861328d8b9f177c3ee8d541 not found: ID does not exist" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.269350 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt7d8\" (UniqueName: \"kubernetes.io/projected/f2e2f8e4-eb90-4d97-8796-8f5d196577ce-kube-api-access-tt7d8\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.269946 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"f2e2f8e4-eb90-4d97-8796-8f5d196577ce\") " pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.366400 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.652904 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Feb 01 07:49:00 crc kubenswrapper[4835]: W0201 07:49:00.657217 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2e2f8e4_eb90_4d97_8796_8f5d196577ce.slice/crio-8d63f4213b4e575f9fa7f6636745f4b4d555d78213c56efc75136d9adc404202 WatchSource:0}: Error finding container 8d63f4213b4e575f9fa7f6636745f4b4d555d78213c56efc75136d9adc404202: Status 404 returned error can't find the container with id 8d63f4213b4e575f9fa7f6636745f4b4d555d78213c56efc75136d9adc404202 Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.907550 4835 generic.go:334] "Generic (PLEG): container finished" podID="952a92f0-8bd4-4aa9-b437-af019f748380" containerID="6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997" exitCode=0 Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.907680 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerDied","Data":"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997"} Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.911458 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"39cc2216f3110369d2fdb141e31cb3f0931f6db70a6aab1d853e606d8dca7dc4"} Feb 01 07:49:00 crc kubenswrapper[4835]: I0201 07:49:00.911490 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"8d63f4213b4e575f9fa7f6636745f4b4d555d78213c56efc75136d9adc404202"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.019169 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.020760 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:49:01 crc kubenswrapper[4835]: E0201 07:49:01.021195 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.021900 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.022843 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.585674 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1edd7394-0f8e-4271-8774-f228946e62f3" path="/var/lib/kubelet/pods/1edd7394-0f8e-4271-8774-f228946e62f3/volumes" Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.940987 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerStarted","Data":"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946569 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="e4501caf2712efde1072e65cdc2495e22511b6ca50d0de32e4362eb3116d1f13" exitCode=1 Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946599 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"41f50b96136eaae91636269f8bfa47862af4f96b115163aaffe156988450d4a4"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946613 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946622 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"6d543491d8f0729ab57b25dc009a5e53210189f8867bea16936e1ba49aa87463"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946631 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"9dfb9fafa9f7b6aca2e897462158b2e6918ac0c51e08838a2af0060d19e450ec"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946639 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"50187ec0044aa65d0dfc04bb190e11910e7dd6df21a714b706ced9753431b60b"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.946647 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"e4501caf2712efde1072e65cdc2495e22511b6ca50d0de32e4362eb3116d1f13"} Feb 01 07:49:01 crc kubenswrapper[4835]: I0201 07:49:01.961397 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xgrp2" podStartSLOduration=2.524816796 podStartE2EDuration="4.961376011s" podCreationTimestamp="2026-02-01 07:48:57 +0000 UTC" firstStartedPulling="2026-02-01 07:48:58.865298822 +0000 UTC m=+1611.985735266" lastFinishedPulling="2026-02-01 07:49:01.301858047 +0000 UTC m=+1614.422294481" observedRunningTime="2026-02-01 07:49:01.956102542 +0000 UTC m=+1615.076538976" watchObservedRunningTime="2026-02-01 07:49:01.961376011 +0000 UTC m=+1615.081812455" Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.956630 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19" exitCode=1 Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957362 4835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957392 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957416 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"dff20be10edb56e0bf7c65fae7a9a4a50e30929b326c3cc3407aee5e7fed7c13"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957444 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"bb9e30181dc2e29c7cbb808fa255eca4e29643c8d5a1d41ffb4eedef8cfda794"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957453 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"024d1559d50fedb11ec83f9a36946428ba56b4e2ee849e2174dde39b0f4b6245"} Feb 01 07:49:02 crc kubenswrapper[4835]: I0201 07:49:02.957461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.976843 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613" exitCode=1 Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.977298 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="e7803d57ef9f8ca7ab7e274227ef6c8f5664fb9604460e89a7dccb307d6d3835" exitCode=1 Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.976939 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.977355 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"e7803d57ef9f8ca7ab7e274227ef6c8f5664fb9604460e89a7dccb307d6d3835"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.977381 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"3ee15002522b3bad1068c43364071ba2181fc2a29d8e762e9687e95c5a3b7e1b"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 
07:49:03.977403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"d66db67a8b5851acb3426fe89016568c6df1b70535718d50bab43208a03fa504"} Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.978219 4835 scope.go:117] "RemoveContainer" containerID="e4501caf2712efde1072e65cdc2495e22511b6ca50d0de32e4362eb3116d1f13" Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.978368 4835 scope.go:117] "RemoveContainer" containerID="c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19" Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.978620 4835 scope.go:117] "RemoveContainer" containerID="b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613" Feb 01 07:49:03 crc kubenswrapper[4835]: I0201 07:49:03.978706 4835 scope.go:117] "RemoveContainer" containerID="e7803d57ef9f8ca7ab7e274227ef6c8f5664fb9604460e89a7dccb307d6d3835" Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.035869 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992581 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" exitCode=1 Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992913 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" exitCode=1 Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992645 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77"} Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992941 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea"} Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992951 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6"} Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992961 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454"} Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.992977 4835 scope.go:117] "RemoveContainer" containerID="c542416827eeef621bc9aca8e48a29338e6bd9c000c191055db8f6ea89995b19" Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.994491 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" Feb 01 07:49:04 crc kubenswrapper[4835]: I0201 07:49:04.994785 4835 scope.go:117] "RemoveContainer" 
containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" Feb 01 07:49:04 crc kubenswrapper[4835]: E0201 07:49:04.996290 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:05 crc kubenswrapper[4835]: I0201 07:49:05.020782 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:05 crc kubenswrapper[4835]: I0201 07:49:05.060194 4835 scope.go:117] "RemoveContainer" containerID="e4501caf2712efde1072e65cdc2495e22511b6ca50d0de32e4362eb3116d1f13" Feb 01 07:49:05 crc kubenswrapper[4835]: I0201 07:49:05.567544 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:05 crc kubenswrapper[4835]: I0201 07:49:05.567594 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:49:05 crc kubenswrapper[4835]: E0201 07:49:05.567954 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.016831 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77" exitCode=1 Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.016885 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea" exitCode=1 Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.016937 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77"} Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017008 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea"} Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017047 4835 scope.go:117] "RemoveContainer" 
containerID="e7803d57ef9f8ca7ab7e274227ef6c8f5664fb9604460e89a7dccb307d6d3835" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017720 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017812 4835 scope.go:117] "RemoveContainer" containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017926 4835 scope.go:117] "RemoveContainer" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.017984 4835 scope.go:117] "RemoveContainer" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77" Feb 01 07:49:06 crc kubenswrapper[4835]: E0201 07:49:06.018323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:06 crc kubenswrapper[4835]: I0201 07:49:06.078389 4835 scope.go:117] "RemoveContainer" containerID="b5b5df0939e8da11d020fa69c912de29cb26187bba91448c0e8b628b35f0b613" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.022350 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.022833 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.023613 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.023640 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.023666 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc" gracePeriod=30 Feb 01 07:49:07 crc kubenswrapper[4835]: 
I0201 07:49:07.027793 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.043207 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.043302 4835 scope.go:117] "RemoveContainer" containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.043487 4835 scope.go:117] "RemoveContainer" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea" Feb 01 07:49:07 crc kubenswrapper[4835]: I0201 07:49:07.043538 4835 scope.go:117] "RemoveContainer" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77" Feb 01 07:49:07 crc kubenswrapper[4835]: E0201 07:49:07.043872 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:07 crc kubenswrapper[4835]: E0201 07:49:07.324912 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.051691 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc" exitCode=0 Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.051740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc"} Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.052060 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68"} Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.052284 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.052639 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.172862 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.172942 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:08 crc kubenswrapper[4835]: I0201 07:49:08.234271 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:09 crc kubenswrapper[4835]: I0201 07:49:09.076548 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4"} Feb 01 07:49:09 crc kubenswrapper[4835]: I0201 07:49:09.076715 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:09 crc kubenswrapper[4835]: I0201 07:49:09.141392 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:09 crc kubenswrapper[4835]: I0201 07:49:09.204759 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:49:10 crc kubenswrapper[4835]: I0201 07:49:10.085990 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" exitCode=1 Feb 01 07:49:10 crc kubenswrapper[4835]: I0201 07:49:10.086093 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4"} Feb 01 07:49:10 crc kubenswrapper[4835]: I0201 07:49:10.086182 4835 scope.go:117] "RemoveContainer" containerID="5f562129e4e7a937bc85ef18cd0fc52c647af4abebeb9eed500135118d5fd888" Feb 01 07:49:10 crc kubenswrapper[4835]: I0201 07:49:10.087139 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:10 crc kubenswrapper[4835]: E0201 07:49:10.087551 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.103046 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xgrp2" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="registry-server" containerID="cri-o://437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" gracePeriod=2 Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.104271 4835 scope.go:117] "RemoveContainer" 
containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:11 crc kubenswrapper[4835]: E0201 07:49:11.104522 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.568612 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:49:11 crc kubenswrapper[4835]: E0201 07:49:11.569056 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.600962 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.763229 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") pod \"952a92f0-8bd4-4aa9-b437-af019f748380\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.763591 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") pod \"952a92f0-8bd4-4aa9-b437-af019f748380\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.763809 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") pod \"952a92f0-8bd4-4aa9-b437-af019f748380\" (UID: \"952a92f0-8bd4-4aa9-b437-af019f748380\") " Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.764766 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities" (OuterVolumeSpecName: "utilities") pod "952a92f0-8bd4-4aa9-b437-af019f748380" (UID: "952a92f0-8bd4-4aa9-b437-af019f748380"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.775704 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4" (OuterVolumeSpecName: "kube-api-access-ds6g4") pod "952a92f0-8bd4-4aa9-b437-af019f748380" (UID: "952a92f0-8bd4-4aa9-b437-af019f748380"). InnerVolumeSpecName "kube-api-access-ds6g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.826840 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "952a92f0-8bd4-4aa9-b437-af019f748380" (UID: "952a92f0-8bd4-4aa9-b437-af019f748380"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.865755 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds6g4\" (UniqueName: \"kubernetes.io/projected/952a92f0-8bd4-4aa9-b437-af019f748380-kube-api-access-ds6g4\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.865805 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:11 crc kubenswrapper[4835]: I0201 07:49:11.865824 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952a92f0-8bd4-4aa9-b437-af019f748380-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.113847 4835 generic.go:334] "Generic (PLEG): container finished" podID="952a92f0-8bd4-4aa9-b437-af019f748380" containerID="437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" exitCode=0 Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.113904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerDied","Data":"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9"} Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.113938 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xgrp2" event={"ID":"952a92f0-8bd4-4aa9-b437-af019f748380","Type":"ContainerDied","Data":"531b752c0353bd0cf7d0d623b4ef2f05ab183ae8b42ef50855bcea2f7ac14cc4"} Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.113962 4835 scope.go:117] "RemoveContainer" containerID="437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.114093 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xgrp2" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.136900 4835 scope.go:117] "RemoveContainer" containerID="6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.163316 4835 scope.go:117] "RemoveContainer" containerID="811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.222739 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.237242 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xgrp2"] Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238026 4835 scope.go:117] "RemoveContainer" containerID="437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" Feb 01 07:49:12 crc kubenswrapper[4835]: E0201 07:49:12.238466 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9\": container with ID starting with 437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9 not found: ID does not exist" containerID="437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238492 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9"} err="failed to get container status \"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9\": rpc error: code = NotFound desc = could not find container \"437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9\": container with ID starting with 437ea2e04b47b66befe1da9b50a037a92ecf8e2a332384ade2c947d46974a8e9 not found: ID does not exist" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238513 4835 scope.go:117] "RemoveContainer" containerID="6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997" Feb 01 07:49:12 crc kubenswrapper[4835]: E0201 07:49:12.238864 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997\": container with ID starting with 6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997 not found: ID does not exist" containerID="6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238941 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997"} err="failed to get container status \"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997\": rpc error: code = NotFound desc = could not find container \"6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997\": container with ID starting with 6ee6f079615b79438f153e72935713c6df9931ac3842f4e28427eae45b23e997 not found: ID does not exist" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.238997 4835 scope.go:117] "RemoveContainer" containerID="811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee" Feb 01 07:49:12 crc kubenswrapper[4835]: E0201 07:49:12.239452 4835 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee\": container with ID starting with 811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee not found: ID does not exist" containerID="811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee" Feb 01 07:49:12 crc kubenswrapper[4835]: I0201 07:49:12.239522 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee"} err="failed to get container status \"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee\": rpc error: code = NotFound desc = could not find container \"811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee\": container with ID starting with 811b21cb733038396715d36077fb049854b3757f440863dbcefa75a9320e20ee not found: ID does not exist" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.018903 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.019952 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:13 crc kubenswrapper[4835]: E0201 07:49:13.020485 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.022602 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.022640 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:13 crc kubenswrapper[4835]: I0201 07:49:13.582577 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" path="/var/lib/kubelet/pods/952a92f0-8bd4-4aa9-b437-af019f748380/volumes" Feb 01 07:49:15 crc kubenswrapper[4835]: I0201 07:49:15.021776 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:16 crc kubenswrapper[4835]: I0201 07:49:16.023142 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.021396 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP 
probe failed with statuscode: 503" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.021820 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.022824 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.022854 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.022901 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68" gracePeriod=30 Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.024125 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.194251 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68" exitCode=0 Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.194319 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68"} Feb 01 07:49:19 crc kubenswrapper[4835]: I0201 07:49:19.194400 4835 scope.go:117] "RemoveContainer" containerID="397c73b00c04db01df3e8a36434377b8f8e589ca9c6353eeef20c5573cf758fc" Feb 01 07:49:19 crc kubenswrapper[4835]: E0201 07:49:19.900122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.021362 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.206111 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3"} Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.206549 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:20 crc kubenswrapper[4835]: 
Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.206900 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4"
Feb 01 07:49:20 crc kubenswrapper[4835]: E0201 07:49:20.207205 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.566783 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.566827 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d"
Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.567052 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454"
Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.567125 4835 scope.go:117] "RemoveContainer" containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6"
Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.567211 4835 scope.go:117] "RemoveContainer" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea"
Feb 01 07:49:20 crc kubenswrapper[4835]: I0201 07:49:20.567252 4835 scope.go:117] "RemoveContainer" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77"
Feb 01 07:49:20 crc kubenswrapper[4835]: E0201 07:49:20.764704 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.218085 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"}
Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.218757 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r"
Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.219218 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:49:21 crc kubenswrapper[4835]: E0201 07:49:21.219577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251134 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" exitCode=1
Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251268 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8"}
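
The different back-off values in these entries (20s for proxy-server, already 5m0s for proxy-httpd in the 7d8cf99555 pod and for machine-config-daemon) come from the kubelet's CrashLoopBackOff schedule: the restart delay doubles per failed restart and is capped. The sketch below reproduces that arithmetic with the commonly documented defaults (10s base, 5m0s cap); the base is an assumption, since only the 20s/40s/5m0s values are visible in this log:

    package main

    import (
    	"fmt"
    	"time"
    )

    // crashLoopDelay doubles the restart delay per completed restart
    // attempt from a 10s base and caps it at 5m0s.
    func crashLoopDelay(restarts int) time.Duration {
    	delay := 10 * time.Second
    	for i := 0; i < restarts; i++ {
    		delay *= 2
    		if delay >= 5*time.Minute {
    			return 5 * time.Minute
    		}
    	}
    	return delay
    }

    func main() {
    	for r := 0; r <= 6; r++ {
    		fmt.Printf("restart %d -> back-off %v\n", r, crashLoopDelay(r))
    	}
    	// restart 1 -> 20s, restart 2 -> 40s, ..., restart 5+ -> 5m0s,
    	// matching the progression in the messages above.
    }
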
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8"} Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b"} Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251344 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a"} Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251400 4835 scope.go:117] "RemoveContainer" containerID="014b284010003166efbc92474316abd90e420a8635aafb2c660fb04b1cfed454" Feb 01 07:49:21 crc kubenswrapper[4835]: I0201 07:49:21.251833 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:21 crc kubenswrapper[4835]: E0201 07:49:21.252032 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270590 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8" exitCode=1 Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270652 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b" exitCode=1 Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270682 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e" exitCode=1 Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270784 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8"} Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270831 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b"} Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270856 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e"} Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.270899 4835 scope.go:117] "RemoveContainer" containerID="325fc3a889bbf20a4c90aad8f0f84caaf16c7870750328eef2f96dc599b7d3ea" Feb 01 07:49:22 crc 
Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.272293 4835 scope.go:117] "RemoveContainer" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a"
Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.272519 4835 scope.go:117] "RemoveContainer" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b"
Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.272804 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8"
Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.272931 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e"
Feb 01 07:49:22 crc kubenswrapper[4835]: E0201 07:49:22.273888 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.281793 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" exitCode=1
Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.281838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"}
Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.282702 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.282737 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:49:22 crc kubenswrapper[4835]: E0201 07:49:22.283121 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.314532 4835 scope.go:117] "RemoveContainer" containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6"
containerID="3675c87d6622f01fc61d145aa6b1e53ab778afbb1063428fc754c891679b40f6" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.367053 4835 scope.go:117] "RemoveContainer" containerID="fbf3c4e0172c9018417d341c8556f14bc2eaca0c5d6aaafefebf684016adda77" Feb 01 07:49:22 crc kubenswrapper[4835]: I0201 07:49:22.404549 4835 scope.go:117] "RemoveContainer" containerID="a4040cacf4e44fe2fba71125e67d7fed8b0dd9e27ff15ee01f56721f2ae8ee2d" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.307206 4835 scope.go:117] "RemoveContainer" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.307726 4835 scope.go:117] "RemoveContainer" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.307903 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.307969 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e" Feb 01 07:49:23 crc kubenswrapper[4835]: E0201 07:49:23.308487 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.310759 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.310828 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:49:23 crc kubenswrapper[4835]: E0201 07:49:23.311202 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:23 crc kubenswrapper[4835]: I0201 07:49:23.566970 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:49:23 crc kubenswrapper[4835]: E0201 07:49:23.567358 4835 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:49:24 crc kubenswrapper[4835]: I0201 07:49:24.535828 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:49:24 crc kubenswrapper[4835]: I0201 07:49:24.536882 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:24 crc kubenswrapper[4835]: I0201 07:49:24.536905 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:49:24 crc kubenswrapper[4835]: E0201 07:49:24.537449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:25 crc kubenswrapper[4835]: I0201 07:49:25.023381 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:25 crc kubenswrapper[4835]: I0201 07:49:25.023483 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:28 crc kubenswrapper[4835]: I0201 07:49:28.021572 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.860476 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:29 crc kubenswrapper[4835]: E0201 07:49:29.860938 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="extract-utilities" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.860952 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="extract-utilities" Feb 01 07:49:29 crc kubenswrapper[4835]: E0201 07:49:29.860963 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="registry-server" Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.860969 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="registry-server" Feb 01 07:49:29 crc 
Feb 01 07:49:29 crc kubenswrapper[4835]: E0201 07:49:29.860982 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="extract-content"
Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.860987 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="extract-content"
Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.861124 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="952a92f0-8bd4-4aa9-b437-af019f748380" containerName="registry-server"
Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.866537 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtg6f"
Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.870262 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"]
Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.979390 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f"
Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.979467 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f"
Feb 01 07:49:29 crc kubenswrapper[4835]: I0201 07:49:29.979497 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f"
Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.023267 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.081376 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f"
Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.081475 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f"
Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.081512 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f"
\"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.082007 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.082007 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.113080 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") pod \"redhat-operators-xtg6f\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.237134 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:30 crc kubenswrapper[4835]: I0201 07:49:30.698135 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:30 crc kubenswrapper[4835]: W0201 07:49:30.699060 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124384c0_3e99_4689_bccb_5f0d29df89ee.slice/crio-7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b WatchSource:0}: Error finding container 7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b: Status 404 returned error can't find the container with id 7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.027275 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.027612 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.028343 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.028362 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.028389 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" 
containerID="cri-o://dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3" gracePeriod=30 Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.032651 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.381580 4835 generic.go:334] "Generic (PLEG): container finished" podID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerID="7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca" exitCode=0 Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.381675 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerDied","Data":"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca"} Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.381891 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerStarted","Data":"7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b"} Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.386252 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3" exitCode=0 Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.386279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3"} Feb 01 07:49:31 crc kubenswrapper[4835]: I0201 07:49:31.386320 4835 scope.go:117] "RemoveContainer" containerID="b8326a6e6498baf2c3c0e58ceebcaffe1160b44529dec51b48c761e8af76de68" Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.395292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerStarted","Data":"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b"} Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.405062 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"} Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.405118 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9"} Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.405400 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:32 crc kubenswrapper[4835]: I0201 07:49:32.407204 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:33 
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.432593 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" exitCode=1
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.432800 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"}
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.433229 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.433272 4835 scope.go:117] "RemoveContainer" containerID="ad6bd27a39205185373142d8b4201f9a5aa828ebf7e9c5908f8168428f8cd2f4"
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.433395 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"
Feb 01 07:49:33 crc kubenswrapper[4835]: E0201 07:49:33.433891 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.449501 4835 generic.go:334] "Generic (PLEG): container finished" podID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerID="6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b" exitCode=0
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.449564 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerDied","Data":"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b"}
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.568298 4835 scope.go:117] "RemoveContainer" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a"
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.568394 4835 scope.go:117] "RemoveContainer" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b"
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.568562 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8"
Feb 01 07:49:33 crc kubenswrapper[4835]: I0201 07:49:33.568609 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e"
Feb 01 07:49:33 crc kubenswrapper[4835]: E0201 07:49:33.568941 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:49:34 crc kubenswrapper[4835]: I0201 07:49:34.018915 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p"
Feb 01 07:49:34 crc kubenswrapper[4835]: I0201 07:49:34.459269 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"
Feb 01 07:49:34 crc kubenswrapper[4835]: E0201 07:49:34.459978 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:49:34 crc kubenswrapper[4835]: I0201 07:49:34.460991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerStarted","Data":"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974"}
Feb 01 07:49:34 crc kubenswrapper[4835]: I0201 07:49:34.498905 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xtg6f" podStartSLOduration=2.914828404 podStartE2EDuration="5.498887575s" podCreationTimestamp="2026-02-01 07:49:29 +0000 UTC" firstStartedPulling="2026-02-01 07:49:31.382931709 +0000 UTC m=+1644.503368143" lastFinishedPulling="2026-02-01 07:49:33.96699086 +0000 UTC m=+1647.087427314" observedRunningTime="2026-02-01 07:49:34.491206405 +0000 UTC m=+1647.611642839" watchObservedRunningTime="2026-02-01 07:49:34.498887575 +0000 UTC m=+1647.619324009"
Feb 01 07:49:35 crc kubenswrapper[4835]: I0201 07:49:35.470154 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42"
Feb 01 07:49:35 crc kubenswrapper[4835]: E0201 07:49:35.470611 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 07:49:35 crc kubenswrapper[4835]: I0201 07:49:35.566700 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e"
Feb 01 07:49:35 crc kubenswrapper[4835]: E0201 07:49:35.566941 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:49:36 crc kubenswrapper[4835]: I0201 07:49:36.566922 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
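
The pod_startup_latency_tracker entry above is plain timestamp arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling). A sketch reproducing both values from the times printed in that entry (the tracker itself uses the monotonic m= offsets, hence the few-nanosecond difference in the SLO figure):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2026-02-01 07:49:29 +0000 UTC")
    	running := parse("2026-02-01 07:49:34.498887575 +0000 UTC")
    	pullStart := parse("2026-02-01 07:49:31.382931709 +0000 UTC")
    	pullEnd := parse("2026-02-01 07:49:33.96699086 +0000 UTC")

    	e2e := running.Sub(created)
    	fmt.Println("podStartE2EDuration:", e2e)                       // 5.498887575s
    	fmt.Println("podStartSLOduration:", e2e-pullEnd.Sub(pullStart)) // ~2.914828s
    }
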
containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:49:36 crc kubenswrapper[4835]: I0201 07:49:36.567281 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:49:36 crc kubenswrapper[4835]: E0201 07:49:36.567511 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:49:37 crc kubenswrapper[4835]: I0201 07:49:37.022947 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:40 crc kubenswrapper[4835]: I0201 07:49:40.020761 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:40 crc kubenswrapper[4835]: I0201 07:49:40.021147 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:40 crc kubenswrapper[4835]: I0201 07:49:40.237637 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:40 crc kubenswrapper[4835]: I0201 07:49:40.237739 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:41 crc kubenswrapper[4835]: I0201 07:49:41.299788 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xtg6f" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" probeResult="failure" output=< Feb 01 07:49:41 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 01 07:49:41 crc kubenswrapper[4835]: > Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.021439 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.021565 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.022547 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, 
will be restarted" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.022607 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.022658 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9" gracePeriod=30 Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.024979 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.553533 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9" exitCode=0 Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.553635 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9"} Feb 01 07:49:43 crc kubenswrapper[4835]: I0201 07:49:43.554209 4835 scope.go:117] "RemoveContainer" containerID="dfcbc8158540e8b14b8f031f0ed70eccc3b8694b265776d8471950ed2ff440a3" Feb 01 07:49:43 crc kubenswrapper[4835]: E0201 07:49:43.655587 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:44 crc kubenswrapper[4835]: I0201 07:49:44.566737 4835 scope.go:117] "RemoveContainer" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9" Feb 01 07:49:44 crc kubenswrapper[4835]: I0201 07:49:44.567533 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:49:44 crc kubenswrapper[4835]: E0201 07:49:44.567890 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:49:45 crc kubenswrapper[4835]: I0201 07:49:45.567459 4835 scope.go:117] "RemoveContainer" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" Feb 01 07:49:45 crc kubenswrapper[4835]: I0201 
Feb 01 07:49:45 crc kubenswrapper[4835]: I0201 07:49:45.569878 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8"
Feb 01 07:49:45 crc kubenswrapper[4835]: I0201 07:49:45.570152 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e"
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.599902 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" exitCode=1
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600217 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" exitCode=1
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.599974 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd"}
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600253 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd"}
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600267 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3"}
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa"}
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600295 4835 scope.go:117] "RemoveContainer" containerID="40689cac6b013611eb6f85e7cfc6082a3c8887e457da164c307a5d7ce31cf40b"
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600940 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa"
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.600999 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3"
Feb 01 07:49:46 crc kubenswrapper[4835]: E0201 07:49:46.601315 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 07:49:46 crc kubenswrapper[4835]: I0201 07:49:46.676910 4835 scope.go:117] "RemoveContainer" containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a"
containerID="fff9c47d9f9751dda6b5a7766119279bc23c2f2edc3650a927e4a08bcbc7e47a" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.622898 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" exitCode=1 Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.622951 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" exitCode=1 Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.622988 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd"} Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.623087 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd"} Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.623131 4835 scope.go:117] "RemoveContainer" containerID="0d957b4b4419b3ad0555e1431ee8a63c3430d586ceef39de00ff73272ceae03e" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.624485 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.624654 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.624922 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.625059 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:49:47 crc kubenswrapper[4835]: E0201 07:49:47.625902 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:47 crc kubenswrapper[4835]: I0201 07:49:47.689470 4835 scope.go:117] "RemoveContainer" containerID="4a22a49fb5ce65461b0b377a3d52609ecf4a1ff09a43966ceaa98314c4a6d9d8" Feb 01 07:49:49 crc kubenswrapper[4835]: I0201 07:49:49.566495 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 
Feb 01 07:49:49 crc kubenswrapper[4835]: E0201 07:49:49.567017 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640"
Feb 01 07:49:50 crc kubenswrapper[4835]: I0201 07:49:50.290420 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xtg6f"
Feb 01 07:49:50 crc kubenswrapper[4835]: I0201 07:49:50.340468 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xtg6f"
Feb 01 07:49:50 crc kubenswrapper[4835]: I0201 07:49:50.539774 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"]
Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.567916 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06"
Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.567971 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44"
Feb 01 07:49:51 crc kubenswrapper[4835]: E0201 07:49:51.568377 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.680959 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c" exitCode=1
Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.681025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c"}
Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.681649 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xtg6f" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" containerID="cri-o://b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" gracePeriod=2
Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.682326 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa"
Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.682546 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3"
Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.682702 4835 scope.go:117] "RemoveContainer" containerID="fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c"
kubenswrapper[4835]: I0201 07:49:51.682734 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:49:51 crc kubenswrapper[4835]: I0201 07:49:51.682831 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:49:51 crc kubenswrapper[4835]: E0201 07:49:51.976230 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.163430 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.269010 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") pod \"124384c0-3e99-4689-bccb-5f0d29df89ee\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.269111 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") pod \"124384c0-3e99-4689-bccb-5f0d29df89ee\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.269237 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") pod \"124384c0-3e99-4689-bccb-5f0d29df89ee\" (UID: \"124384c0-3e99-4689-bccb-5f0d29df89ee\") " Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.271695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities" (OuterVolumeSpecName: "utilities") pod "124384c0-3e99-4689-bccb-5f0d29df89ee" (UID: "124384c0-3e99-4689-bccb-5f0d29df89ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.275719 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5" (OuterVolumeSpecName: "kube-api-access-tbqn5") pod "124384c0-3e99-4689-bccb-5f0d29df89ee" (UID: "124384c0-3e99-4689-bccb-5f0d29df89ee"). InnerVolumeSpecName "kube-api-access-tbqn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.371623 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbqn5\" (UniqueName: \"kubernetes.io/projected/124384c0-3e99-4689-bccb-5f0d29df89ee-kube-api-access-tbqn5\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.371668 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.416629 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "124384c0-3e99-4689-bccb-5f0d29df89ee" (UID: "124384c0-3e99-4689-bccb-5f0d29df89ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.472767 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124384c0-3e99-4689-bccb-5f0d29df89ee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692459 4835 generic.go:334] "Generic (PLEG): container finished" podID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerID="b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" exitCode=0 Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692537 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtg6f" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692566 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerDied","Data":"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974"} Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692655 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtg6f" event={"ID":"124384c0-3e99-4689-bccb-5f0d29df89ee","Type":"ContainerDied","Data":"7d22d53e511d179e4981719f8ecee51bc2645b3c446221ec04b613a6ca27ca6b"} Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.692690 4835 scope.go:117] "RemoveContainer" containerID="b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.721666 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66"} Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.722399 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.722527 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.722681 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.722857 4835 scope.go:117] "RemoveContainer" 
containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:49:52 crc kubenswrapper[4835]: E0201 07:49:52.723760 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.740583 4835 scope.go:117] "RemoveContainer" containerID="6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.751574 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.761563 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xtg6f"] Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.772266 4835 scope.go:117] "RemoveContainer" containerID="7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.809468 4835 scope.go:117] "RemoveContainer" containerID="b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" Feb 01 07:49:52 crc kubenswrapper[4835]: E0201 07:49:52.810118 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974\": container with ID starting with b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974 not found: ID does not exist" containerID="b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.810189 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974"} err="failed to get container status \"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974\": rpc error: code = NotFound desc = could not find container \"b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974\": container with ID starting with b18ce1b7bb1b37bfd65553f9a9bab8d7febc0f61ac885505642707b190975974 not found: ID does not exist" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.810229 4835 scope.go:117] "RemoveContainer" containerID="6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b" Feb 01 07:49:52 crc kubenswrapper[4835]: E0201 07:49:52.811810 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b\": container with 
ID starting with 6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b not found: ID does not exist" containerID="6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.812101 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b"} err="failed to get container status \"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b\": rpc error: code = NotFound desc = could not find container \"6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b\": container with ID starting with 6380b3f2fd17c078ccd2827cbc2d6f324f8e7503a1a93babd7807fd95707479b not found: ID does not exist" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.812182 4835 scope.go:117] "RemoveContainer" containerID="7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca" Feb 01 07:49:52 crc kubenswrapper[4835]: E0201 07:49:52.812751 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca\": container with ID starting with 7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca not found: ID does not exist" containerID="7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca" Feb 01 07:49:52 crc kubenswrapper[4835]: I0201 07:49:52.812800 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca"} err="failed to get container status \"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca\": rpc error: code = NotFound desc = could not find container \"7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca\": container with ID starting with 7734274ed7d5cd73d5f9493526243959230d16ca4615808132012c7c9f7ca0ca not found: ID does not exist" Feb 01 07:49:53 crc kubenswrapper[4835]: I0201 07:49:53.582753 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" path="/var/lib/kubelet/pods/124384c0-3e99-4689-bccb-5f0d29df89ee/volumes" Feb 01 07:49:56 crc kubenswrapper[4835]: I0201 07:49:56.568136 4835 scope.go:117] "RemoveContainer" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9" Feb 01 07:49:56 crc kubenswrapper[4835]: I0201 07:49:56.568198 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:49:56 crc kubenswrapper[4835]: E0201 07:49:56.568625 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:02 crc kubenswrapper[4835]: I0201 07:50:02.567260 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:50:02 crc kubenswrapper[4835]: E0201 07:50:02.569526 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.567531 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.567899 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.567937 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.568088 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.568278 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:50:03 crc kubenswrapper[4835]: I0201 07:50:03.568326 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:50:03 crc kubenswrapper[4835]: E0201 07:50:03.568375 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:03 crc kubenswrapper[4835]: E0201 07:50:03.568750 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.571531 4835 scope.go:117] "RemoveContainer" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9" Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.571584 4835 scope.go:117] "RemoveContainer" 
containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:50:08 crc kubenswrapper[4835]: E0201 07:50:08.825959 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.883687 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"} Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.884031 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:50:08 crc kubenswrapper[4835]: I0201 07:50:08.884579 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:50:08 crc kubenswrapper[4835]: E0201 07:50:08.884891 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:09 crc kubenswrapper[4835]: I0201 07:50:09.895079 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:50:09 crc kubenswrapper[4835]: E0201 07:50:09.895407 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:13 crc kubenswrapper[4835]: I0201 07:50:13.023099 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:15 crc kubenswrapper[4835]: I0201 07:50:15.021886 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:15 crc kubenswrapper[4835]: I0201 07:50:15.568442 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:50:15 crc kubenswrapper[4835]: I0201 07:50:15.568917 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:15 crc kubenswrapper[4835]: E0201 07:50:15.569721 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.021079 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.567685 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.568235 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.568353 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.568591 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:50:16 crc kubenswrapper[4835]: E0201 07:50:16.568642 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:50:16 crc kubenswrapper[4835]: I0201 07:50:16.568662 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:50:16 crc kubenswrapper[4835]: E0201 07:50:16.569121 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.021599 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:19 crc 
kubenswrapper[4835]: I0201 07:50:19.021713 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.022714 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.022750 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.022790 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" gracePeriod=30 Feb 01 07:50:19 crc kubenswrapper[4835]: I0201 07:50:19.024180 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:19 crc kubenswrapper[4835]: E0201 07:50:19.377869 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.003791 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" exitCode=0 Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.003894 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074"} Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.003953 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"} Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.003989 4835 scope.go:117] "RemoveContainer" containerID="84397ad55d99116c1a2942bd910f7ca4d56420e65b59c11c397c729684823cf9" Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.004364 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:50:20 crc kubenswrapper[4835]: I0201 07:50:20.004777 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:20 crc kubenswrapper[4835]: E0201 07:50:20.005019 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.017841 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" exitCode=1 Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.018174 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415"} Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.018371 4835 scope.go:117] "RemoveContainer" containerID="ad09c849675c188dd2406d4627b033030acfdcc27f8e162db38425cedb1a3d42" Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.019601 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:21 crc kubenswrapper[4835]: I0201 07:50:21.019642 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:21 crc kubenswrapper[4835]: E0201 07:50:21.020290 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:22 crc kubenswrapper[4835]: I0201 07:50:22.019360 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:50:22 crc kubenswrapper[4835]: I0201 07:50:22.031980 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:22 crc kubenswrapper[4835]: I0201 07:50:22.032062 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:22 crc kubenswrapper[4835]: E0201 07:50:22.032572 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:23 crc kubenswrapper[4835]: I0201 07:50:23.044816 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:23 crc kubenswrapper[4835]: I0201 07:50:23.044864 4835 scope.go:117] "RemoveContainer" 
containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:23 crc kubenswrapper[4835]: E0201 07:50:23.045515 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.096978 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" exitCode=1 Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.097094 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66"} Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.097476 4835 scope.go:117] "RemoveContainer" containerID="fe6bd8e84d6ed5717736c29de8d74a04026b73df093d00dea9d9e4f338cae07c" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099052 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099170 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099318 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099389 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:50:27 crc kubenswrapper[4835]: I0201 07:50:27.099610 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.112773 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" exitCode=1 Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.113376 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"} Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.113401 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb"} Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.113435 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9"} Feb 01 07:50:28 crc kubenswrapper[4835]: I0201 07:50:28.113461 4835 scope.go:117] "RemoveContainer" containerID="477a564d57eee0ea6652a8c54898c8c81b66cca6bc9faa6c189ad37617c9ddaa" Feb 01 07:50:28 crc kubenswrapper[4835]: E0201 07:50:28.317790 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136778 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" exitCode=1 Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136913 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" exitCode=1 Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136938 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" exitCode=1 Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136859 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c"} Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.136992 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb"} Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.137020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9"} Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.137049 4835 scope.go:117] "RemoveContainer" containerID="1e29784f7a1f3e2e2cbf285d4e3f6c30a3fa736d61daa032e2a2f6ada94b8bcd" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.138453 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.138884 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.139107 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.139147 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.139255 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:50:29 crc kubenswrapper[4835]: 
E0201 07:50:29.153015 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.211990 4835 scope.go:117] "RemoveContainer" containerID="540727307e8978e8c43e8a0a4f7ec6ce8bdbeefaa7fe9819f13948bd386f35c3" Feb 01 07:50:29 crc kubenswrapper[4835]: I0201 07:50:29.258510 4835 scope.go:117] "RemoveContainer" containerID="51de1435c23f559ffe6911415b24cbe68db6d22980b3e394e286a72d3924e3cd" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.158578 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.159139 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.159293 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.159309 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.159368 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:50:30 crc kubenswrapper[4835]: E0201 07:50:30.159940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for 
\"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.567156 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.567487 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:30 crc kubenswrapper[4835]: I0201 07:50:30.567733 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:50:30 crc kubenswrapper[4835]: E0201 07:50:30.568102 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:30 crc kubenswrapper[4835]: E0201 07:50:30.568599 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:50:36 crc kubenswrapper[4835]: I0201 07:50:36.567497 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:36 crc kubenswrapper[4835]: I0201 07:50:36.568282 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:36 crc kubenswrapper[4835]: E0201 07:50:36.568719 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.567846 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.568637 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.568791 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:50:43 crc 
kubenswrapper[4835]: I0201 07:50:43.568850 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.568890 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.568909 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:50:43 crc kubenswrapper[4835]: I0201 07:50:43.569000 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:50:43 crc kubenswrapper[4835]: E0201 07:50:43.778091 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:43 crc kubenswrapper[4835]: E0201 07:50:43.847876 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.299355 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc"} Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.299649 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.299961 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:44 crc kubenswrapper[4835]: E0201 07:50:44.300267 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.318893 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967"} Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.319671 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.319752 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.319864 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.319905 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:50:44 crc kubenswrapper[4835]: E0201 07:50:44.320206 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:44 crc kubenswrapper[4835]: I0201 07:50:44.566568 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:50:44 crc kubenswrapper[4835]: E0201 07:50:44.567081 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:50:45 crc kubenswrapper[4835]: I0201 07:50:45.110488 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:50:45 crc kubenswrapper[4835]: E0201 07:50:45.110656 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:50:45 crc kubenswrapper[4835]: E0201 07:50:45.110801 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. 
No retries permitted until 2026-02-01 07:52:47.110742513 +0000 UTC m=+1840.231178987 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:50:45 crc kubenswrapper[4835]: I0201 07:50:45.328924 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:45 crc kubenswrapper[4835]: E0201 07:50:45.329268 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:46 crc kubenswrapper[4835]: E0201 07:50:46.699197 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:50:47 crc kubenswrapper[4835]: I0201 07:50:47.346514 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:50:48 crc kubenswrapper[4835]: I0201 07:50:48.539810 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:48 crc kubenswrapper[4835]: I0201 07:50:48.566569 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:48 crc kubenswrapper[4835]: I0201 07:50:48.566627 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:48 crc kubenswrapper[4835]: E0201 07:50:48.567062 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:50:51 crc kubenswrapper[4835]: I0201 07:50:51.537727 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:52 crc kubenswrapper[4835]: I0201 07:50:52.538137 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:54 crc 
kubenswrapper[4835]: I0201 07:50:54.538217 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.538312 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.539208 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.539241 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.539278 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc" gracePeriod=30 Feb 01 07:50:54 crc kubenswrapper[4835]: I0201 07:50:54.541280 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:50:54 crc kubenswrapper[4835]: E0201 07:50:54.871977 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.433384 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc" exitCode=0 Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.433487 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc"} Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.433528 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652"} Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.433558 4835 scope.go:117] "RemoveContainer" containerID="bc550c00403e30ba12df38404f9902b768425c1c4567d628a65fda0a79990d06" Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.434380 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:55 crc kubenswrapper[4835]: E0201 07:50:55.434753 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:55 crc kubenswrapper[4835]: I0201 07:50:55.435060 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:50:56 crc kubenswrapper[4835]: I0201 07:50:56.447849 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:50:56 crc kubenswrapper[4835]: E0201 07:50:56.448327 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:50:56 crc kubenswrapper[4835]: I0201 07:50:56.566798 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:50:56 crc kubenswrapper[4835]: E0201 07:50:56.567482 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:50:58 crc kubenswrapper[4835]: I0201 07:50:58.567194 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:50:58 crc kubenswrapper[4835]: I0201 07:50:58.567280 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:50:58 crc kubenswrapper[4835]: I0201 07:50:58.567398 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:50:58 crc kubenswrapper[4835]: I0201 07:50:58.567463 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:50:58 crc kubenswrapper[4835]: E0201 07:50:58.567855 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" 
pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:50:59 crc kubenswrapper[4835]: I0201 07:50:59.566998 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:50:59 crc kubenswrapper[4835]: I0201 07:50:59.567341 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:50:59 crc kubenswrapper[4835]: E0201 07:50:59.787504 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:00 crc kubenswrapper[4835]: I0201 07:51:00.492161 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37"} Feb 01 07:51:00 crc kubenswrapper[4835]: I0201 07:51:00.492965 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:00 crc kubenswrapper[4835]: E0201 07:51:00.493449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:00 crc kubenswrapper[4835]: I0201 07:51:00.493560 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:51:00 crc kubenswrapper[4835]: I0201 07:51:00.538340 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:01 crc kubenswrapper[4835]: I0201 07:51:01.501636 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:01 crc kubenswrapper[4835]: E0201 07:51:01.502218 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:02 crc kubenswrapper[4835]: I0201 07:51:02.537460 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:03 crc kubenswrapper[4835]: I0201 07:51:03.537542 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Feb 01 07:51:04 crc kubenswrapper[4835]: I0201 07:51:04.024511 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:05 crc kubenswrapper[4835]: I0201 07:51:05.022287 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.537570 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.537705 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.539008 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.539062 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.539216 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" gracePeriod=30 Feb 01 07:51:06 crc kubenswrapper[4835]: I0201 07:51:06.541141 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:06 crc kubenswrapper[4835]: E0201 07:51:06.665387 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.021152 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.536537 4835 prober.go:107] "Probe failed" probeType="Readiness" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.95:8080/healthcheck\": dial tcp 10.217.0.95:8080: connect: connection refused" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.563368 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" exitCode=0 Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.563483 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652"} Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.563560 4835 scope.go:117] "RemoveContainer" containerID="e991ff7c18a37a0b32c5c7dec8751c4f0440dc18ad1b6d1b714cb78fcfbe4dcc" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.564585 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.564668 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:51:07 crc kubenswrapper[4835]: E0201 07:51:07.565098 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:51:07 crc kubenswrapper[4835]: I0201 07:51:07.567630 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:51:07 crc kubenswrapper[4835]: E0201 07:51:07.568134 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.020253 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.020606 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.021569 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:51:10 
crc kubenswrapper[4835]: I0201 07:51:10.022196 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.022240 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.022271 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" gracePeriod=30 Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.023964 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:51:10 crc kubenswrapper[4835]: E0201 07:51:10.150614 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.567288 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.567383 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.567544 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.567602 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:10 crc kubenswrapper[4835]: E0201 07:51:10.568140 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.606756 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" exitCode=0 Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.606831 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37"} Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.606900 4835 scope.go:117] "RemoveContainer" containerID="13725799dc5e4cb1af200cf5a41607f4dffcb3ee9a7f61f63c6908ebaeb72074" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.607643 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:10 crc kubenswrapper[4835]: I0201 07:51:10.607690 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:10 crc kubenswrapper[4835]: E0201 07:51:10.607984 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.657106 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548" exitCode=1 Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.657144 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548"} Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.658928 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.659039 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.659083 4835 scope.go:117] "RemoveContainer" containerID="f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548" Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.659231 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:14 crc kubenswrapper[4835]: I0201 07:51:14.659299 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:14 crc kubenswrapper[4835]: E0201 07:51:14.902531 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.678025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120"} Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.679249 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.679357 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.679522 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:15 crc kubenswrapper[4835]: I0201 07:51:15.679590 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:15 crc kubenswrapper[4835]: E0201 07:51:15.679909 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:51:21 crc kubenswrapper[4835]: I0201 07:51:21.567431 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:51:21 crc kubenswrapper[4835]: I0201 07:51:21.567733 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:51:21 crc kubenswrapper[4835]: I0201 07:51:21.567837 4835 scope.go:117] "RemoveContainer" 
containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:51:21 crc kubenswrapper[4835]: E0201 07:51:21.568031 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:51:21 crc kubenswrapper[4835]: E0201 07:51:21.568584 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:51:22 crc kubenswrapper[4835]: I0201 07:51:22.568058 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:22 crc kubenswrapper[4835]: I0201 07:51:22.568548 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:22 crc kubenswrapper[4835]: E0201 07:51:22.568991 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:26 crc kubenswrapper[4835]: I0201 07:51:26.567148 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:26 crc kubenswrapper[4835]: I0201 07:51:26.567735 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:26 crc kubenswrapper[4835]: I0201 07:51:26.567989 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:26 crc kubenswrapper[4835]: I0201 07:51:26.568121 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:26 crc kubenswrapper[4835]: E0201 07:51:26.568769 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", 
failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:51:33 crc kubenswrapper[4835]: I0201 07:51:33.567541 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:51:33 crc kubenswrapper[4835]: I0201 07:51:33.568196 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:51:33 crc kubenswrapper[4835]: E0201 07:51:33.568597 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:51:34 crc kubenswrapper[4835]: I0201 07:51:34.566701 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:51:34 crc kubenswrapper[4835]: E0201 07:51:34.567014 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.572389 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.572784 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.572810 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.572862 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:37 crc kubenswrapper[4835]: E0201 07:51:37.573018 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" 
pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.573050 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:37 crc kubenswrapper[4835]: I0201 07:51:37.573118 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:37 crc kubenswrapper[4835]: E0201 07:51:37.573675 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:51:46 crc kubenswrapper[4835]: I0201 07:51:46.567971 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:51:46 crc kubenswrapper[4835]: I0201 07:51:46.568603 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:51:46 crc kubenswrapper[4835]: E0201 07:51:46.568964 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:51:48 crc kubenswrapper[4835]: I0201 07:51:48.567650 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:48 crc kubenswrapper[4835]: I0201 07:51:48.567977 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:48 crc kubenswrapper[4835]: E0201 07:51:48.776940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:49 crc kubenswrapper[4835]: I0201 07:51:49.006183 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" 
event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887"} Feb 01 07:51:49 crc kubenswrapper[4835]: I0201 07:51:49.006931 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:49 crc kubenswrapper[4835]: E0201 07:51:49.007272 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:49 crc kubenswrapper[4835]: I0201 07:51:49.007558 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:51:49 crc kubenswrapper[4835]: I0201 07:51:49.567940 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:51:49 crc kubenswrapper[4835]: E0201 07:51:49.568649 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.019602 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" exitCode=1 Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.019662 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887"} Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.020542 4835 scope.go:117] "RemoveContainer" containerID="a86c6ceea4229bfdb0cfe43e7eb712af72c89a967eba4526a0ffa729b7b26415" Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.020790 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:50 crc kubenswrapper[4835]: I0201 07:51:50.020982 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:51:50 crc kubenswrapper[4835]: E0201 07:51:50.021768 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.036604 4835 scope.go:117] "RemoveContainer" 
containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.036660 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:51:51 crc kubenswrapper[4835]: E0201 07:51:51.037078 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.568063 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.568222 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.568459 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:51 crc kubenswrapper[4835]: I0201 07:51:51.568583 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.019471 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.066264 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63"} Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.066314 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7"} Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.067768 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:51:52 crc kubenswrapper[4835]: I0201 07:51:52.067967 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:51:52 crc kubenswrapper[4835]: E0201 07:51:52.068571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090464 4835 
generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" exitCode=1 Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090539 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" exitCode=1 Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090539 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63"} Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090602 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7"} Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090621 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150"} Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090643 4835 scope.go:117] "RemoveContainer" containerID="ceabb3fe584961464b2c97738e98303d62f35a6a41c066ed190ec40a5d9dc5eb" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090561 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" exitCode=1 Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090709 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" exitCode=1 Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.090750 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b"} Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.091782 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.091985 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.092248 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.092360 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:51:53 crc kubenswrapper[4835]: E0201 07:51:53.092959 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.155754 4835 scope.go:117] "RemoveContainer" containerID="6c44b70885c8463b03a15367c795fb3abec319c464011e7eb6f4df420d28c5e9" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.205556 4835 scope.go:117] "RemoveContainer" containerID="b18edf5426d800301b7f1b334f5a4400c2754fbf0afd74fec4fb662b19d43cd9" Feb 01 07:51:53 crc kubenswrapper[4835]: I0201 07:51:53.258750 4835 scope.go:117] "RemoveContainer" containerID="ee1895d7ea11d9b655913dc3923a2259fd023bf7fc24244c3e1543588312c97c" Feb 01 07:51:54 crc kubenswrapper[4835]: I0201 07:51:54.111074 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:51:54 crc kubenswrapper[4835]: I0201 07:51:54.111193 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:51:54 crc kubenswrapper[4835]: I0201 07:51:54.111372 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:51:54 crc kubenswrapper[4835]: I0201 07:51:54.111467 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:51:54 crc kubenswrapper[4835]: E0201 07:51:54.111921 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:00 crc kubenswrapper[4835]: I0201 07:52:00.567338 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:52:00 crc kubenswrapper[4835]: E0201 07:52:00.568556 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:52:01 crc kubenswrapper[4835]: I0201 07:52:01.567708 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:01 crc kubenswrapper[4835]: I0201 07:52:01.567804 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:01 crc kubenswrapper[4835]: E0201 07:52:01.568294 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567034 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567452 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567659 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567776 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:06 crc kubenswrapper[4835]: E0201 07:52:06.567877 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.567958 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:06 crc kubenswrapper[4835]: I0201 07:52:06.568024 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:06 crc kubenswrapper[4835]: E0201 07:52:06.568583 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s 
restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:14 crc kubenswrapper[4835]: I0201 07:52:14.567622 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:52:14 crc kubenswrapper[4835]: E0201 07:52:14.568639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:52:16 crc kubenswrapper[4835]: I0201 07:52:16.567990 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:16 crc kubenswrapper[4835]: I0201 07:52:16.568053 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:16 crc kubenswrapper[4835]: E0201 07:52:16.568466 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:52:17 crc kubenswrapper[4835]: I0201 07:52:17.576397 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:52:17 crc kubenswrapper[4835]: I0201 07:52:17.576936 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:17 crc kubenswrapper[4835]: E0201 07:52:17.577523 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:18 crc kubenswrapper[4835]: I0201 07:52:18.568460 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:18 crc kubenswrapper[4835]: I0201 07:52:18.568947 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:18 crc kubenswrapper[4835]: 
I0201 07:52:18.569265 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:18 crc kubenswrapper[4835]: I0201 07:52:18.569538 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:18 crc kubenswrapper[4835]: E0201 07:52:18.570459 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:29 crc kubenswrapper[4835]: I0201 07:52:29.568132 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.499005 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7"} Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.508170 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120" exitCode=1 Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.508216 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120"} Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.508273 4835 scope.go:117] "RemoveContainer" containerID="f37851d985a30497d9ff14d46c11d28293ba0304df3383819707502eddde0548" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.509943 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.510084 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.510134 4835 scope.go:117] "RemoveContainer" containerID="6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.510286 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.510357 4835 scope.go:117] "RemoveContainer" 
containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:30 crc kubenswrapper[4835]: E0201 07:52:30.511027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.582670 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:52:30 crc kubenswrapper[4835]: I0201 07:52:30.582699 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:30 crc kubenswrapper[4835]: E0201 07:52:30.761542 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 07:52:31.521392 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a"} Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 07:52:31.522078 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 07:52:31.522577 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:31 crc kubenswrapper[4835]: E0201 07:52:31.523001 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 07:52:31.569118 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:31 crc kubenswrapper[4835]: I0201 
07:52:31.569175 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:31 crc kubenswrapper[4835]: E0201 07:52:31.569646 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:52:32 crc kubenswrapper[4835]: I0201 07:52:32.542703 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:32 crc kubenswrapper[4835]: E0201 07:52:32.543106 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:35 crc kubenswrapper[4835]: I0201 07:52:35.023500 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:37 crc kubenswrapper[4835]: I0201 07:52:37.024383 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:40 crc kubenswrapper[4835]: I0201 07:52:40.021823 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:40 crc kubenswrapper[4835]: I0201 07:52:40.022197 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568348 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568612 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568676 4835 scope.go:117] "RemoveContainer" containerID="6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568837 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:41 crc kubenswrapper[4835]: I0201 07:52:41.568922 4835 scope.go:117] "RemoveContainer" 
containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:41 crc kubenswrapper[4835]: E0201 07:52:41.769124 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.666725 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482"} Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.670772 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.670915 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.671106 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:42 crc kubenswrapper[4835]: I0201 07:52:42.671175 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:42 crc kubenswrapper[4835]: E0201 07:52:42.671917 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.021602 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.021715 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.022671 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.022712 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.022760 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" gracePeriod=30 Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.025265 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:52:43 crc kubenswrapper[4835]: E0201 07:52:43.150692 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.682496 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" exitCode=0 Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.682615 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a"} Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.683203 4835 scope.go:117] "RemoveContainer" containerID="fd71313b52b607b08d45c1044a3e43cf4a212c9d65982fa27fbac2ade3d5ed37" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.683998 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:52:43 crc kubenswrapper[4835]: I0201 07:52:43.684071 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:43 crc kubenswrapper[4835]: E0201 07:52:43.684610 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:45 crc kubenswrapper[4835]: I0201 07:52:45.567545 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:45 crc kubenswrapper[4835]: I0201 07:52:45.567583 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:45 crc kubenswrapper[4835]: E0201 07:52:45.568041 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:52:47 crc kubenswrapper[4835]: I0201 07:52:47.121208 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:52:47 crc kubenswrapper[4835]: E0201 07:52:47.121536 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:52:47 crc kubenswrapper[4835]: E0201 07:52:47.122205 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:54:49.122167659 +0000 UTC m=+1962.242604093 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:52:50 crc kubenswrapper[4835]: E0201 07:52:50.348250 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:52:50 crc kubenswrapper[4835]: I0201 07:52:50.747310 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:52:54 crc kubenswrapper[4835]: I0201 07:52:54.567249 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:52:54 crc kubenswrapper[4835]: I0201 07:52:54.567457 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:52:54 crc kubenswrapper[4835]: I0201 07:52:54.568068 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:52:54 crc kubenswrapper[4835]: I0201 07:52:54.568158 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:52:54 crc kubenswrapper[4835]: E0201 07:52:54.568755 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:52:55 crc kubenswrapper[4835]: I0201 07:52:55.567101 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:52:55 crc kubenswrapper[4835]: I0201 07:52:55.567164 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:52:55 crc kubenswrapper[4835]: E0201 07:52:55.569691 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:52:58 crc kubenswrapper[4835]: I0201 07:52:58.567164 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:52:58 crc kubenswrapper[4835]: I0201 07:52:58.567583 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:52:58 crc kubenswrapper[4835]: E0201 07:52:58.567973 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:06 crc kubenswrapper[4835]: I0201 07:53:06.566756 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:06 crc kubenswrapper[4835]: I0201 07:53:06.568535 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:06 crc kubenswrapper[4835]: E0201 07:53:06.569070 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:53:07 crc kubenswrapper[4835]: I0201 07:53:07.570860 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:07 crc kubenswrapper[4835]: I0201 07:53:07.570947 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:07 crc kubenswrapper[4835]: I0201 07:53:07.571034 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:07 crc kubenswrapper[4835]: I0201 07:53:07.571066 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:07 crc kubenswrapper[4835]: E0201 07:53:07.571367 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:13 crc kubenswrapper[4835]: I0201 07:53:13.567444 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:53:13 crc kubenswrapper[4835]: I0201 07:53:13.568069 4835 scope.go:117] "RemoveContainer" 
containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:53:13 crc kubenswrapper[4835]: E0201 07:53:13.568487 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:20 crc kubenswrapper[4835]: I0201 07:53:20.566975 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:20 crc kubenswrapper[4835]: I0201 07:53:20.567707 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:20 crc kubenswrapper[4835]: E0201 07:53:20.568122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:53:21 crc kubenswrapper[4835]: I0201 07:53:21.568147 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:21 crc kubenswrapper[4835]: I0201 07:53:21.568317 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:21 crc kubenswrapper[4835]: I0201 07:53:21.568642 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:21 crc kubenswrapper[4835]: I0201 07:53:21.568742 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:21 crc kubenswrapper[4835]: E0201 07:53:21.569401 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" 
podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:24 crc kubenswrapper[4835]: I0201 07:53:24.566838 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:53:24 crc kubenswrapper[4835]: I0201 07:53:24.567314 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:53:24 crc kubenswrapper[4835]: E0201 07:53:24.567902 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:31 crc kubenswrapper[4835]: I0201 07:53:31.567452 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:31 crc kubenswrapper[4835]: I0201 07:53:31.568071 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:31 crc kubenswrapper[4835]: E0201 07:53:31.568503 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:53:34 crc kubenswrapper[4835]: I0201 07:53:34.568110 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:34 crc kubenswrapper[4835]: I0201 07:53:34.568690 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:34 crc kubenswrapper[4835]: I0201 07:53:34.568906 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:34 crc kubenswrapper[4835]: I0201 07:53:34.568976 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:34 crc kubenswrapper[4835]: E0201 07:53:34.569568 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:38 crc kubenswrapper[4835]: I0201 07:53:38.567110 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:53:38 crc kubenswrapper[4835]: I0201 07:53:38.567582 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:53:38 crc kubenswrapper[4835]: E0201 07:53:38.568192 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.246972 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" exitCode=1 Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.247048 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967"} Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.247127 4835 scope.go:117] "RemoveContainer" containerID="9760d7167271d692b8a511dedaf5143643873c09e285f761e1c84b1ed0a4fc66" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248261 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248373 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248595 4835 scope.go:117] "RemoveContainer" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248631 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:41 crc kubenswrapper[4835]: I0201 07:53:41.248695 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:41 crc kubenswrapper[4835]: E0201 07:53:41.249459 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:46 crc kubenswrapper[4835]: I0201 07:53:46.567155 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:46 crc kubenswrapper[4835]: I0201 07:53:46.568029 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:46 crc kubenswrapper[4835]: E0201 07:53:46.568525 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:53:51 crc kubenswrapper[4835]: I0201 07:53:51.567946 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:53:51 crc kubenswrapper[4835]: I0201 07:53:51.568540 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:53:51 crc kubenswrapper[4835]: E0201 07:53:51.568958 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567235 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567409 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567637 4835 scope.go:117] "RemoveContainer" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567651 4835 scope.go:117] "RemoveContainer" 
containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:53:53 crc kubenswrapper[4835]: I0201 07:53:53.567721 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:53:53 crc kubenswrapper[4835]: E0201 07:53:53.568476 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:53:59 crc kubenswrapper[4835]: I0201 07:53:59.567177 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:53:59 crc kubenswrapper[4835]: I0201 07:53:59.567643 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:53:59 crc kubenswrapper[4835]: E0201 07:53:59.568301 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:03 crc kubenswrapper[4835]: I0201 07:54:03.567740 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:03 crc kubenswrapper[4835]: I0201 07:54:03.568331 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:54:03 crc kubenswrapper[4835]: E0201 07:54:03.568629 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.493221 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" exitCode=1 Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.493279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482"} Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.493687 4835 scope.go:117] "RemoveContainer" containerID="6fdbb0ab768d00deff13ea9eb6be0e0c1db12da04c0cfc661beeecd91e511120" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494519 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494571 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494591 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494654 4835 scope.go:117] "RemoveContainer" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494661 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:06 crc kubenswrapper[4835]: I0201 07:54:06.494693 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:06 crc kubenswrapper[4835]: E0201 07:54:06.676394 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.515771 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857"} Feb 01 07:54:07 
crc kubenswrapper[4835]: I0201 07:54:07.516933 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.517005 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.517032 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.517110 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:07 crc kubenswrapper[4835]: I0201 07:54:07.517151 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:07 crc kubenswrapper[4835]: E0201 07:54:07.517507 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:12 crc kubenswrapper[4835]: I0201 07:54:12.568168 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:12 crc kubenswrapper[4835]: I0201 07:54:12.571250 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:54:12 crc kubenswrapper[4835]: E0201 07:54:12.572040 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.603319 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" exitCode=1 Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.603453 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857"} Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.603796 4835 scope.go:117] "RemoveContainer" containerID="7e996cce6d01e8d3083a03c89344fa5e2e5fa37ac118b8a6c148b0b9b7355967" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605020 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605174 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605235 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605359 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605392 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:15 crc kubenswrapper[4835]: I0201 07:54:15.605533 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:15 crc kubenswrapper[4835]: E0201 07:54:15.606329 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:17 crc kubenswrapper[4835]: I0201 07:54:17.575557 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:17 crc kubenswrapper[4835]: I0201 07:54:17.577568 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:54:17 crc kubenswrapper[4835]: E0201 07:54:17.578166 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:27 crc kubenswrapper[4835]: I0201 07:54:27.575076 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:27 crc kubenswrapper[4835]: I0201 07:54:27.575887 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:54:27 crc kubenswrapper[4835]: E0201 07:54:27.576276 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.567593 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.568999 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.569100 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.569194 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.569235 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.569973 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.570020 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:29 crc kubenswrapper[4835]: I0201 07:54:29.570115 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:29 crc kubenswrapper[4835]: E0201 07:54:29.804963 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:29 crc kubenswrapper[4835]: E0201 07:54:29.805201 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.761715 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" exitCode=1 Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.761801 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f"} Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.761839 4835 scope.go:117] "RemoveContainer" containerID="fd9d216b66c3647739f73e0836c125c81318f67e0a8c9bde84e63bd35e00ac44" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.762623 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.762657 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:54:30 crc kubenswrapper[4835]: E0201 07:54:30.763111 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.775087 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758"} Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776338 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776501 4835 scope.go:117] "RemoveContainer" 
containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776673 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776698 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:30 crc kubenswrapper[4835]: I0201 07:54:30.776763 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:30 crc kubenswrapper[4835]: E0201 07:54:30.777352 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:32 crc kubenswrapper[4835]: I0201 07:54:32.535520 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:54:32 crc kubenswrapper[4835]: I0201 07:54:32.537283 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:32 crc kubenswrapper[4835]: I0201 07:54:32.537351 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:54:32 crc kubenswrapper[4835]: E0201 07:54:32.537987 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:33 crc kubenswrapper[4835]: I0201 07:54:33.535270 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:54:33 crc kubenswrapper[4835]: I0201 07:54:33.536096 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:33 crc kubenswrapper[4835]: I0201 
07:54:33.536126 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:54:33 crc kubenswrapper[4835]: E0201 07:54:33.536577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.567495 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.567963 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:54:41 crc kubenswrapper[4835]: E0201 07:54:41.748619 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.897096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11"} Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.897820 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:54:41 crc kubenswrapper[4835]: I0201 07:54:41.898198 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:41 crc kubenswrapper[4835]: E0201 07:54:41.898715 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568087 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568252 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568350 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568360 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.568423 4835 scope.go:117] "RemoveContainer" 
containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.914013 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89"} Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.916487 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" exitCode=1 Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.916589 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11"} Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.916675 4835 scope.go:117] "RemoveContainer" containerID="06922a0b8ce79c7eb71093c515176475f288044de65d87642d47f586da9f2887" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.917272 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:42 crc kubenswrapper[4835]: I0201 07:54:42.917368 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:54:42 crc kubenswrapper[4835]: E0201 07:54:42.917658 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.018753 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:54:43 crc kubenswrapper[4835]: E0201 07:54:43.278162 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.928126 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.928876 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:54:43 crc kubenswrapper[4835]: E0201 07:54:43.929239 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.938452 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" exitCode=1 Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.938771 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" exitCode=1 Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.938921 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" exitCode=1 Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939037 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" exitCode=1 Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.938528 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89"} Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939330 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536"} Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939497 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150"} Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939616 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794"} Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939420 4835 scope.go:117] "RemoveContainer" containerID="c824ff586a60c18988b768030416a59f174de4bb936a995b9af96cc4479421e7" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.939836 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.940014 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.940216 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.940304 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.940485 4835 scope.go:117] "RemoveContainer" 
containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:54:43 crc kubenswrapper[4835]: E0201 07:54:43.941069 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:43 crc kubenswrapper[4835]: I0201 07:54:43.990942 4835 scope.go:117] "RemoveContainer" containerID="d0c892cf1d23f0b0b0fa51809ca962ef9842e6c3557a5a82c7b7f081e17a3150" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.031850 4835 scope.go:117] "RemoveContainer" containerID="d191d3adf8759e60cad2e44fea6598777896cc2c47cd8f565d2b730df700370b" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.074828 4835 scope.go:117] "RemoveContainer" containerID="99b316b6bdc264678ded3bf2f41707f3eda4647d44027e771fc09b484f0cac63" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.961212 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.961678 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.961722 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.961874 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.962027 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.962042 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:54:44 crc kubenswrapper[4835]: I0201 07:54:44.962128 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:54:44 crc kubenswrapper[4835]: E0201 07:54:44.962774 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for 
\"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:44 crc kubenswrapper[4835]: E0201 07:54:44.963208 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:45 crc kubenswrapper[4835]: I0201 07:54:45.567566 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:54:45 crc kubenswrapper[4835]: I0201 07:54:45.567609 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:54:45 crc kubenswrapper[4835]: E0201 07:54:45.568001 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:54:49 crc kubenswrapper[4835]: I0201 07:54:49.166345 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:54:49 crc kubenswrapper[4835]: E0201 07:54:49.166574 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:54:49 crc kubenswrapper[4835]: E0201 07:54:49.166735 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. 
No retries permitted until 2026-02-01 07:56:51.16669729 +0000 UTC m=+2084.287133764 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:54:53 crc kubenswrapper[4835]: E0201 07:54:53.748666 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:54:54 crc kubenswrapper[4835]: I0201 07:54:54.045066 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:54:55 crc kubenswrapper[4835]: I0201 07:54:55.191744 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:54:55 crc kubenswrapper[4835]: I0201 07:54:55.191828 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:54:56 crc kubenswrapper[4835]: I0201 07:54:56.567803 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:54:56 crc kubenswrapper[4835]: I0201 07:54:56.567855 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:54:56 crc kubenswrapper[4835]: E0201 07:54:56.568263 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576499 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576636 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576794 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576809 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:54:57 crc kubenswrapper[4835]: I0201 07:54:57.576878 4835 scope.go:117] "RemoveContainer" 
containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:54:57 crc kubenswrapper[4835]: E0201 07:54:57.793876 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.095794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0"} Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.096885 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.097025 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.097224 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:54:58 crc kubenswrapper[4835]: I0201 07:54:58.097291 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:54:58 crc kubenswrapper[4835]: E0201 07:54:58.097923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:00 crc kubenswrapper[4835]: I0201 07:55:00.566992 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:00 crc kubenswrapper[4835]: I0201 07:55:00.567048 4835 scope.go:117] 
"RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:00 crc kubenswrapper[4835]: E0201 07:55:00.567323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:08 crc kubenswrapper[4835]: I0201 07:55:08.567076 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:55:08 crc kubenswrapper[4835]: I0201 07:55:08.567610 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:08 crc kubenswrapper[4835]: E0201 07:55:08.568058 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:09 crc kubenswrapper[4835]: I0201 07:55:09.569134 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:09 crc kubenswrapper[4835]: I0201 07:55:09.569706 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:09 crc kubenswrapper[4835]: I0201 07:55:09.569888 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:09 crc kubenswrapper[4835]: I0201 07:55:09.569953 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:09 crc kubenswrapper[4835]: E0201 07:55:09.570477 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" 
podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:12 crc kubenswrapper[4835]: I0201 07:55:12.567089 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:12 crc kubenswrapper[4835]: I0201 07:55:12.567133 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:12 crc kubenswrapper[4835]: E0201 07:55:12.567510 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.566394 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567605 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567685 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567747 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567828 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:22 crc kubenswrapper[4835]: I0201 07:55:22.567862 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:22 crc kubenswrapper[4835]: E0201 07:55:22.568073 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:22 crc kubenswrapper[4835]: E0201 07:55:22.568102 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:25 crc kubenswrapper[4835]: I0201 07:55:25.192916 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:55:25 crc kubenswrapper[4835]: I0201 07:55:25.193759 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:55:26 crc kubenswrapper[4835]: I0201 07:55:26.567016 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:26 crc kubenswrapper[4835]: I0201 07:55:26.567058 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:26 crc kubenswrapper[4835]: E0201 07:55:26.567255 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:35 crc kubenswrapper[4835]: I0201 07:55:35.567599 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:35 crc kubenswrapper[4835]: I0201 07:55:35.568297 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:35 crc kubenswrapper[4835]: I0201 07:55:35.568451 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:35 crc kubenswrapper[4835]: I0201 07:55:35.568509 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:35 crc kubenswrapper[4835]: E0201 07:55:35.568861 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:37 crc kubenswrapper[4835]: I0201 07:55:37.575678 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:37 crc kubenswrapper[4835]: I0201 07:55:37.576129 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:37 crc kubenswrapper[4835]: I0201 07:55:37.576180 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:55:37 crc kubenswrapper[4835]: I0201 07:55:37.576222 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:37 crc kubenswrapper[4835]: E0201 07:55:37.576684 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:37 crc kubenswrapper[4835]: E0201 07:55:37.829252 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:38 crc kubenswrapper[4835]: I0201 07:55:38.464597 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553"} Feb 01 07:55:38 crc kubenswrapper[4835]: I0201 07:55:38.464983 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:55:38 crc kubenswrapper[4835]: I0201 07:55:38.466219 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:38 crc kubenswrapper[4835]: E0201 07:55:38.466705 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:39 crc kubenswrapper[4835]: I0201 07:55:39.483852 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 
01 07:55:39 crc kubenswrapper[4835]: E0201 07:55:39.485159 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:43 crc kubenswrapper[4835]: I0201 07:55:43.025387 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:45 crc kubenswrapper[4835]: I0201 07:55:45.021595 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:46 crc kubenswrapper[4835]: I0201 07:55:46.021818 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.023066 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.023209 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.024351 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.024387 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.024459 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" gracePeriod=30 Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.027992 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:55:49 crc kubenswrapper[4835]: E0201 07:55:49.152379 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.568274 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.568439 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.568624 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.568691 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:49 crc kubenswrapper[4835]: E0201 07:55:49.569269 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.572145 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" exitCode=0 Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.584511 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553"} Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.584596 4835 scope.go:117] "RemoveContainer" containerID="95d674e5f7a79ab0193c82933c579e4af4469b45d92be4f1941a2b874a91cd0a" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.585962 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:55:49 crc kubenswrapper[4835]: I0201 07:55:49.586022 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:55:49 crc kubenswrapper[4835]: E0201 07:55:49.586638 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to 
\"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:55:51 crc kubenswrapper[4835]: I0201 07:55:51.567548 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:55:51 crc kubenswrapper[4835]: I0201 07:55:51.567974 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:55:51 crc kubenswrapper[4835]: E0201 07:55:51.568542 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.192372 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.192556 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.192636 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.193745 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.193887 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7" gracePeriod=600 Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.633531 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7" exitCode=0 Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.633586 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7"} Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.633617 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"} Feb 01 07:55:55 crc kubenswrapper[4835]: I0201 07:55:55.633638 4835 scope.go:117] "RemoveContainer" containerID="1cc9b4ca253f3e5b0061f2ee3281a3442ad7613c7a198001df1e889de8e3202e" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.658262 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" exitCode=1 Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.658340 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758"} Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.658733 4835 scope.go:117] "RemoveContainer" containerID="429fdfbd7d247a80e284089a4e87c0237e19cf63c27dfeeed6bbf34128245482" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.659733 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.659851 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.659909 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.660053 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:55:56 crc kubenswrapper[4835]: I0201 07:55:56.660122 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:55:56 crc kubenswrapper[4835]: E0201 07:55:56.660688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:02 crc kubenswrapper[4835]: I0201 07:56:02.566957 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:02 crc kubenswrapper[4835]: I0201 07:56:02.567650 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:02 crc kubenswrapper[4835]: E0201 07:56:02.568032 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:04 crc kubenswrapper[4835]: I0201 07:56:04.567917 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:56:04 crc kubenswrapper[4835]: I0201 07:56:04.568274 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:04 crc kubenswrapper[4835]: E0201 07:56:04.568544 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.567631 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.568514 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.568570 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.568719 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:09 crc kubenswrapper[4835]: I0201 07:56:09.568807 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:09 crc kubenswrapper[4835]: E0201 07:56:09.569511 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:15 crc kubenswrapper[4835]: I0201 07:56:15.566813 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:15 crc kubenswrapper[4835]: I0201 07:56:15.567612 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:15 crc kubenswrapper[4835]: E0201 07:56:15.568037 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.567200 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.567569 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:19 crc kubenswrapper[4835]: E0201 07:56:19.786956 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.913913 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7"} Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.914156 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:56:19 crc kubenswrapper[4835]: I0201 07:56:19.914677 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:19 crc kubenswrapper[4835]: E0201 07:56:19.915078 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566754 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566827 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566856 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566920 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.566959 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:20 crc kubenswrapper[4835]: E0201 07:56:20.567298 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:20 crc kubenswrapper[4835]: I0201 07:56:20.923924 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:20 crc kubenswrapper[4835]: E0201 07:56:20.924611 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:24 crc kubenswrapper[4835]: I0201 07:56:24.543307 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:27 crc kubenswrapper[4835]: I0201 07:56:27.537806 4835 prober.go:107] "Probe failed" probeType="Liveness" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:27 crc kubenswrapper[4835]: I0201 07:56:27.537838 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:27 crc kubenswrapper[4835]: I0201 07:56:27.573944 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:27 crc kubenswrapper[4835]: I0201 07:56:27.574293 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:27 crc kubenswrapper[4835]: E0201 07:56:27.574775 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.541717 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.542136 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.542874 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.542899 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.542932 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7" gracePeriod=30 Feb 01 07:56:30 crc kubenswrapper[4835]: I0201 07:56:30.550451 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:30 crc kubenswrapper[4835]: E0201 07:56:30.852791 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.017909 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7" exitCode=0 Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.017947 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7"} Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.017971 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c"} Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.017986 4835 scope.go:117] "RemoveContainer" containerID="0172ec86d5828183e71ae40e4b9a59b0a2fe1de1c74e7711d8111a19aa0eb652" Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.018590 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:31 crc kubenswrapper[4835]: E0201 07:56:31.018750 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:31 crc kubenswrapper[4835]: I0201 07:56:31.018875 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.029995 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:32 crc kubenswrapper[4835]: E0201 07:56:32.030608 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.566949 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.567028 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.567057 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.567129 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:32 crc kubenswrapper[4835]: I0201 07:56:32.567187 4835 scope.go:117] "RemoveContainer" 
containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:32 crc kubenswrapper[4835]: E0201 07:56:32.567518 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.068713 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" exitCode=1 Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.068815 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0"} Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.069252 4835 scope.go:117] "RemoveContainer" containerID="1b8268b3bec83a014746e8dc06250c05dcb7e750534da6c50d5c417e7dc55857" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070330 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070498 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070545 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070643 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070676 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.070744 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:36 crc kubenswrapper[4835]: E0201 07:56:36.071407 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", 
failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:36 crc kubenswrapper[4835]: I0201 07:56:36.538074 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:37 crc kubenswrapper[4835]: I0201 07:56:37.538380 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:38 crc kubenswrapper[4835]: I0201 07:56:38.566519 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:38 crc kubenswrapper[4835]: I0201 07:56:38.566895 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:38 crc kubenswrapper[4835]: E0201 07:56:38.567245 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:39 crc kubenswrapper[4835]: I0201 07:56:39.542029 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.537679 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.537729 4835 prober.go:107] 
"Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.537829 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.539070 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.539107 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.539161 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" gracePeriod=30 Feb 01 07:56:42 crc kubenswrapper[4835]: I0201 07:56:42.540201 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 07:56:42 crc kubenswrapper[4835]: E0201 07:56:42.661009 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.217664 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" exitCode=0 Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.217677 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c"} Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.217745 4835 scope.go:117] "RemoveContainer" containerID="88b3ba4b52ecb819474ab7399ac2bb548a98b9b172fad1ad56ac2a0e2a8457e7" Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.218330 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:56:43 crc kubenswrapper[4835]: I0201 07:56:43.218360 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:43 crc kubenswrapper[4835]: E0201 07:56:43.218712 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.577060 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.577979 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.578012 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.578118 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.578128 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:47 crc kubenswrapper[4835]: I0201 07:56:47.578171 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:47 crc kubenswrapper[4835]: E0201 07:56:47.768973 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.276842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9"} Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.277874 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.277995 4835 scope.go:117] 
"RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.278203 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.278229 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:56:48 crc kubenswrapper[4835]: I0201 07:56:48.278299 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:56:48 crc kubenswrapper[4835]: E0201 07:56:48.278847 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:56:50 crc kubenswrapper[4835]: I0201 07:56:50.566815 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:56:50 crc kubenswrapper[4835]: I0201 07:56:50.567155 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:56:50 crc kubenswrapper[4835]: E0201 07:56:50.567615 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:56:51 crc kubenswrapper[4835]: E0201 07:56:51.182775 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:56:51 crc kubenswrapper[4835]: E0201 07:56:51.182886 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 07:58:53.182861591 +0000 UTC m=+2206.303298055 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:56:51 crc kubenswrapper[4835]: I0201 07:56:51.182649 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:56:57 crc kubenswrapper[4835]: E0201 07:56:57.047201 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:56:57 crc kubenswrapper[4835]: I0201 07:56:57.354231 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:56:57 crc kubenswrapper[4835]: I0201 07:56:57.575130 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:56:57 crc kubenswrapper[4835]: I0201 07:56:57.575168 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:56:57 crc kubenswrapper[4835]: E0201 07:56:57.575437 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567068 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567362 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567452 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567460 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:00 crc kubenswrapper[4835]: I0201 07:57:00.567491 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:00 crc kubenswrapper[4835]: E0201 07:57:00.567773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:04 crc kubenswrapper[4835]: I0201 07:57:04.566837 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:04 crc kubenswrapper[4835]: I0201 07:57:04.568897 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:04 crc kubenswrapper[4835]: E0201 07:57:04.569665 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:57:08 crc kubenswrapper[4835]: I0201 07:57:08.567240 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:57:08 crc kubenswrapper[4835]: I0201 07:57:08.567741 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:57:08 crc kubenswrapper[4835]: E0201 07:57:08.568141 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567307 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567663 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567739 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567746 4835 
scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:12 crc kubenswrapper[4835]: I0201 07:57:12.567775 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:12 crc kubenswrapper[4835]: E0201 07:57:12.568044 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:17 crc kubenswrapper[4835]: I0201 07:57:17.578858 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:17 crc kubenswrapper[4835]: I0201 07:57:17.579220 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:17 crc kubenswrapper[4835]: E0201 07:57:17.579637 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:57:22 crc kubenswrapper[4835]: I0201 07:57:22.567628 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:57:22 crc kubenswrapper[4835]: I0201 07:57:22.567668 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:57:22 crc kubenswrapper[4835]: E0201 07:57:22.567961 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567042 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567558 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567710 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567725 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:24 crc kubenswrapper[4835]: I0201 07:57:24.567789 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:24 crc kubenswrapper[4835]: E0201 07:57:24.568323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:31 crc kubenswrapper[4835]: I0201 07:57:31.567036 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:31 crc kubenswrapper[4835]: I0201 07:57:31.567748 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:31 crc kubenswrapper[4835]: E0201 07:57:31.568223 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:57:35 crc kubenswrapper[4835]: I0201 07:57:35.567122 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:57:35 crc kubenswrapper[4835]: I0201 
07:57:35.567717 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:57:35 crc kubenswrapper[4835]: E0201 07:57:35.568094 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.567754 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.567861 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.567961 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.567970 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:36 crc kubenswrapper[4835]: I0201 07:57:36.568012 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:36 crc kubenswrapper[4835]: E0201 07:57:36.568548 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.396564 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:41 crc kubenswrapper[4835]: E0201 07:57:41.397308 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="extract-utilities" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.397320 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" 
containerName="extract-utilities" Feb 01 07:57:41 crc kubenswrapper[4835]: E0201 07:57:41.397343 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="extract-content" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.397349 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="extract-content" Feb 01 07:57:41 crc kubenswrapper[4835]: E0201 07:57:41.397363 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.397370 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.397529 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="124384c0-3e99-4689-bccb-5f0d29df89ee" containerName="registry-server" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.398586 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.417977 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.438232 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.438492 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.438559 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.540206 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.540379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.540527 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.541219 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.541248 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.571366 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") pod \"redhat-marketplace-58fdf\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:41 crc kubenswrapper[4835]: I0201 07:57:41.726519 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:42 crc kubenswrapper[4835]: I0201 07:57:42.308273 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:42 crc kubenswrapper[4835]: W0201 07:57:42.325581 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1abf7dc2_505b_4eb6_836c_fd043219944a.slice/crio-c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16 WatchSource:0}: Error finding container c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16: Status 404 returned error can't find the container with id c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16 Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.189765 4835 generic.go:334] "Generic (PLEG): container finished" podID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerID="856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139" exitCode=0 Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.189856 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerDied","Data":"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139"} Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.191620 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerStarted","Data":"c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16"} Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.192370 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.566692 4835 scope.go:117] "RemoveContainer" 
containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:43 crc kubenswrapper[4835]: I0201 07:57:43.566716 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:43 crc kubenswrapper[4835]: E0201 07:57:43.566913 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:57:45 crc kubenswrapper[4835]: I0201 07:57:45.211865 4835 generic.go:334] "Generic (PLEG): container finished" podID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerID="be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23" exitCode=0 Feb 01 07:57:45 crc kubenswrapper[4835]: I0201 07:57:45.213053 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerDied","Data":"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23"} Feb 01 07:57:46 crc kubenswrapper[4835]: I0201 07:57:46.226191 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerStarted","Data":"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732"} Feb 01 07:57:49 crc kubenswrapper[4835]: I0201 07:57:49.566752 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:57:49 crc kubenswrapper[4835]: I0201 07:57:49.567382 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:57:49 crc kubenswrapper[4835]: E0201 07:57:49.567844 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568344 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568520 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568676 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568689 4835 scope.go:117] "RemoveContainer" 
containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.568752 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:57:51 crc kubenswrapper[4835]: E0201 07:57:51.569331 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.727518 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.727803 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.789473 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:51 crc kubenswrapper[4835]: I0201 07:57:51.805288 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-58fdf" podStartSLOduration=8.235505638 podStartE2EDuration="10.805270554s" podCreationTimestamp="2026-02-01 07:57:41 +0000 UTC" firstStartedPulling="2026-02-01 07:57:43.191801066 +0000 UTC m=+2136.312237540" lastFinishedPulling="2026-02-01 07:57:45.761565982 +0000 UTC m=+2138.882002456" observedRunningTime="2026-02-01 07:57:46.255977679 +0000 UTC m=+2139.376414163" watchObservedRunningTime="2026-02-01 07:57:51.805270554 +0000 UTC m=+2144.925706988" Feb 01 07:57:52 crc kubenswrapper[4835]: I0201 07:57:52.327027 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.191853 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.191941 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.393625 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.394233 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-58fdf" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="registry-server" containerID="cri-o://80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" gracePeriod=2 Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.778182 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.885958 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") pod \"1abf7dc2-505b-4eb6-836c-fd043219944a\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.886079 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") pod \"1abf7dc2-505b-4eb6-836c-fd043219944a\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.886175 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") pod \"1abf7dc2-505b-4eb6-836c-fd043219944a\" (UID: \"1abf7dc2-505b-4eb6-836c-fd043219944a\") " Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.888186 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities" (OuterVolumeSpecName: "utilities") pod "1abf7dc2-505b-4eb6-836c-fd043219944a" (UID: "1abf7dc2-505b-4eb6-836c-fd043219944a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.892559 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh" (OuterVolumeSpecName: "kube-api-access-2dfbh") pod "1abf7dc2-505b-4eb6-836c-fd043219944a" (UID: "1abf7dc2-505b-4eb6-836c-fd043219944a"). InnerVolumeSpecName "kube-api-access-2dfbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.910719 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1abf7dc2-505b-4eb6-836c-fd043219944a" (UID: "1abf7dc2-505b-4eb6-836c-fd043219944a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.988099 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.988374 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1abf7dc2-505b-4eb6-836c-fd043219944a-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:57:55 crc kubenswrapper[4835]: I0201 07:57:55.988505 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dfbh\" (UniqueName: \"kubernetes.io/projected/1abf7dc2-505b-4eb6-836c-fd043219944a-kube-api-access-2dfbh\") on node \"crc\" DevicePath \"\"" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.316929 4835 generic.go:334] "Generic (PLEG): container finished" podID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerID="80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" exitCode=0 Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.316981 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerDied","Data":"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732"} Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.317012 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-58fdf" event={"ID":"1abf7dc2-505b-4eb6-836c-fd043219944a","Type":"ContainerDied","Data":"c4e4a369e497853783e3c5fa4192de067cb04dd84f4519b87dc3e490de38fa16"} Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.317034 4835 scope.go:117] "RemoveContainer" containerID="80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.317108 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-58fdf" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.342392 4835 scope.go:117] "RemoveContainer" containerID="be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.366550 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.379557 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-58fdf"] Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.394792 4835 scope.go:117] "RemoveContainer" containerID="856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.433187 4835 scope.go:117] "RemoveContainer" containerID="80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" Feb 01 07:57:56 crc kubenswrapper[4835]: E0201 07:57:56.433813 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732\": container with ID starting with 80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732 not found: ID does not exist" containerID="80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.433853 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732"} err="failed to get container status \"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732\": rpc error: code = NotFound desc = could not find container \"80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732\": container with ID starting with 80e1b9d32dd9a0b305777f5c7c8f33f8d920c4eeb6d1991e71bde8ae2323a732 not found: ID does not exist" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.433884 4835 scope.go:117] "RemoveContainer" containerID="be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23" Feb 01 07:57:56 crc kubenswrapper[4835]: E0201 07:57:56.434261 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23\": container with ID starting with be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23 not found: ID does not exist" containerID="be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.434297 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23"} err="failed to get container status \"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23\": rpc error: code = NotFound desc = could not find container \"be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23\": container with ID starting with be17d0f994972be23910e99f13ee137255fcdc6e2356b12626ab9c1a36408e23 not found: ID does not exist" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.434311 4835 scope.go:117] "RemoveContainer" containerID="856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139" Feb 01 07:57:56 crc kubenswrapper[4835]: E0201 07:57:56.434634 4835 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139\": container with ID starting with 856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139 not found: ID does not exist" containerID="856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139" Feb 01 07:57:56 crc kubenswrapper[4835]: I0201 07:57:56.434658 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139"} err="failed to get container status \"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139\": rpc error: code = NotFound desc = could not find container \"856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139\": container with ID starting with 856838ddd8cd46bd677eb5062bf4ac8f8b3b3a344864ea891a6565a12ea8b139 not found: ID does not exist" Feb 01 07:57:57 crc kubenswrapper[4835]: I0201 07:57:57.578780 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" path="/var/lib/kubelet/pods/1abf7dc2-505b-4eb6-836c-fd043219944a/volumes" Feb 01 07:57:58 crc kubenswrapper[4835]: I0201 07:57:58.566817 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:57:58 crc kubenswrapper[4835]: I0201 07:57:58.567082 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:57:58 crc kubenswrapper[4835]: E0201 07:57:58.567382 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:58:04 crc kubenswrapper[4835]: I0201 07:58:04.566826 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:58:04 crc kubenswrapper[4835]: I0201 07:58:04.567211 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:58:04 crc kubenswrapper[4835]: E0201 07:58:04.567672 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.566851 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.567284 4835 scope.go:117] "RemoveContainer" 
containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.567889 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.567963 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:05 crc kubenswrapper[4835]: I0201 07:58:05.568126 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:05 crc kubenswrapper[4835]: E0201 07:58:05.720868 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.437467 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a"} Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.438366 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.438513 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.438694 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:06 crc kubenswrapper[4835]: I0201 07:58:06.438767 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:06 crc kubenswrapper[4835]: E0201 07:58:06.439221 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for 
\"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.598659 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:10 crc kubenswrapper[4835]: E0201 07:58:10.599930 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="extract-utilities" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.599964 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="extract-utilities" Feb 01 07:58:10 crc kubenswrapper[4835]: E0201 07:58:10.600017 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="extract-content" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.600035 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="extract-content" Feb 01 07:58:10 crc kubenswrapper[4835]: E0201 07:58:10.600060 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="registry-server" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.600072 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="registry-server" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.600490 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abf7dc2-505b-4eb6-836c-fd043219944a" containerName="registry-server" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.602810 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.614067 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.738896 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.739039 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.739076 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.839887 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.839981 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.840008 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.840580 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.840722 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.865348 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") pod \"community-operators-6hkcc\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:10 crc kubenswrapper[4835]: I0201 07:58:10.932294 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:11 crc kubenswrapper[4835]: I0201 07:58:11.423453 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:11 crc kubenswrapper[4835]: W0201 07:58:11.433536 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd226fc0e_17db_48d6_8c00_dc71f542186d.slice/crio-0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536 WatchSource:0}: Error finding container 0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536: Status 404 returned error can't find the container with id 0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536 Feb 01 07:58:11 crc kubenswrapper[4835]: I0201 07:58:11.482480 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerStarted","Data":"0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536"} Feb 01 07:58:12 crc kubenswrapper[4835]: I0201 07:58:12.496234 4835 generic.go:334] "Generic (PLEG): container finished" podID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerID="7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b" exitCode=0 Feb 01 07:58:12 crc kubenswrapper[4835]: I0201 07:58:12.496347 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerDied","Data":"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b"} Feb 01 07:58:13 crc kubenswrapper[4835]: I0201 07:58:13.506216 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerStarted","Data":"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7"} Feb 01 07:58:13 crc kubenswrapper[4835]: I0201 07:58:13.566667 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:58:13 crc kubenswrapper[4835]: I0201 07:58:13.566701 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:58:13 crc kubenswrapper[4835]: E0201 07:58:13.566963 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:58:14 crc kubenswrapper[4835]: I0201 07:58:14.525551 4835 generic.go:334] "Generic 
(PLEG): container finished" podID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerID="b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7" exitCode=0 Feb 01 07:58:14 crc kubenswrapper[4835]: I0201 07:58:14.525715 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerDied","Data":"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7"} Feb 01 07:58:15 crc kubenswrapper[4835]: I0201 07:58:15.537170 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerStarted","Data":"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173"} Feb 01 07:58:15 crc kubenswrapper[4835]: I0201 07:58:15.558883 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6hkcc" podStartSLOduration=2.995972553 podStartE2EDuration="5.558865492s" podCreationTimestamp="2026-02-01 07:58:10 +0000 UTC" firstStartedPulling="2026-02-01 07:58:12.499833401 +0000 UTC m=+2165.620269865" lastFinishedPulling="2026-02-01 07:58:15.06272633 +0000 UTC m=+2168.183162804" observedRunningTime="2026-02-01 07:58:15.556027968 +0000 UTC m=+2168.676464412" watchObservedRunningTime="2026-02-01 07:58:15.558865492 +0000 UTC m=+2168.679301936" Feb 01 07:58:18 crc kubenswrapper[4835]: I0201 07:58:18.567273 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:58:18 crc kubenswrapper[4835]: I0201 07:58:18.567970 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:58:18 crc kubenswrapper[4835]: E0201 07:58:18.568461 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:58:20 crc kubenswrapper[4835]: I0201 07:58:20.932698 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:20 crc kubenswrapper[4835]: I0201 07:58:20.932992 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:20 crc kubenswrapper[4835]: I0201 07:58:20.985850 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.567920 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.568068 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.568264 4835 scope.go:117] "RemoveContainer" 
containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.568336 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:21 crc kubenswrapper[4835]: E0201 07:58:21.568951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.643717 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:21 crc kubenswrapper[4835]: I0201 07:58:21.695969 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:23 crc kubenswrapper[4835]: I0201 07:58:23.611517 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6hkcc" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="registry-server" containerID="cri-o://ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" gracePeriod=2 Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.061433 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.175996 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") pod \"d226fc0e-17db-48d6-8c00-dc71f542186d\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.176074 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") pod \"d226fc0e-17db-48d6-8c00-dc71f542186d\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.176117 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") pod \"d226fc0e-17db-48d6-8c00-dc71f542186d\" (UID: \"d226fc0e-17db-48d6-8c00-dc71f542186d\") " Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.177563 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities" (OuterVolumeSpecName: "utilities") pod "d226fc0e-17db-48d6-8c00-dc71f542186d" (UID: "d226fc0e-17db-48d6-8c00-dc71f542186d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.188474 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc" (OuterVolumeSpecName: "kube-api-access-lscrc") pod "d226fc0e-17db-48d6-8c00-dc71f542186d" (UID: "d226fc0e-17db-48d6-8c00-dc71f542186d"). InnerVolumeSpecName "kube-api-access-lscrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.237273 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d226fc0e-17db-48d6-8c00-dc71f542186d" (UID: "d226fc0e-17db-48d6-8c00-dc71f542186d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.278498 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.278538 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lscrc\" (UniqueName: \"kubernetes.io/projected/d226fc0e-17db-48d6-8c00-dc71f542186d-kube-api-access-lscrc\") on node \"crc\" DevicePath \"\"" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.278572 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d226fc0e-17db-48d6-8c00-dc71f542186d-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.622606 4835 generic.go:334] "Generic (PLEG): container finished" podID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerID="ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" exitCode=0 Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.622671 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerDied","Data":"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173"} Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.622711 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hkcc" event={"ID":"d226fc0e-17db-48d6-8c00-dc71f542186d","Type":"ContainerDied","Data":"0d75697c6ed7b1b3718093a971509729e46e875d4e4dbcf17374b7e01fcdc536"} Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.622739 4835 scope.go:117] "RemoveContainer" containerID="ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.624386 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hkcc" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.661828 4835 scope.go:117] "RemoveContainer" containerID="b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.688508 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.695137 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6hkcc"] Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.709980 4835 scope.go:117] "RemoveContainer" containerID="7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.747134 4835 scope.go:117] "RemoveContainer" containerID="ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" Feb 01 07:58:24 crc kubenswrapper[4835]: E0201 07:58:24.747897 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173\": container with ID starting with ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173 not found: ID does not exist" containerID="ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.747999 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173"} err="failed to get container status \"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173\": rpc error: code = NotFound desc = could not find container \"ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173\": container with ID starting with ec39052df6cbd1023a4a578bb32f27a8e0e16abb83feca6c0c1cc5654f468173 not found: ID does not exist" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.748282 4835 scope.go:117] "RemoveContainer" containerID="b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7" Feb 01 07:58:24 crc kubenswrapper[4835]: E0201 07:58:24.748916 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7\": container with ID starting with b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7 not found: ID does not exist" containerID="b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.748971 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7"} err="failed to get container status \"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7\": rpc error: code = NotFound desc = could not find container \"b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7\": container with ID starting with b507942fba1abd47a89c8a3acdb70022b6187ec12ffcbc9d26b28b586bed2fc7 not found: ID does not exist" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.749010 4835 scope.go:117] "RemoveContainer" containerID="7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b" Feb 01 07:58:24 crc kubenswrapper[4835]: E0201 07:58:24.749468 4835 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b\": container with ID starting with 7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b not found: ID does not exist" containerID="7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b" Feb 01 07:58:24 crc kubenswrapper[4835]: I0201 07:58:24.749512 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b"} err="failed to get container status \"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b\": rpc error: code = NotFound desc = could not find container \"7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b\": container with ID starting with 7eb50785ddb08f0920f8d3aacda04a076160dc163f3a91dde31c864819a4b82b not found: ID does not exist" Feb 01 07:58:25 crc kubenswrapper[4835]: I0201 07:58:25.191790 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:58:25 crc kubenswrapper[4835]: I0201 07:58:25.192188 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:58:25 crc kubenswrapper[4835]: I0201 07:58:25.583662 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" path="/var/lib/kubelet/pods/d226fc0e-17db-48d6-8c00-dc71f542186d/volumes" Feb 01 07:58:28 crc kubenswrapper[4835]: I0201 07:58:28.566981 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:58:28 crc kubenswrapper[4835]: I0201 07:58:28.567276 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:58:28 crc kubenswrapper[4835]: E0201 07:58:28.567574 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:58:29 crc kubenswrapper[4835]: I0201 07:58:29.567264 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:58:29 crc kubenswrapper[4835]: I0201 07:58:29.567314 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:58:29 crc kubenswrapper[4835]: E0201 07:58:29.567866 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.639260 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Feb 01 07:58:31 crc kubenswrapper[4835]: E0201 07:58:31.639995 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="registry-server" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.640008 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="registry-server" Feb 01 07:58:31 crc kubenswrapper[4835]: E0201 07:58:31.640032 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="extract-content" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.640038 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="extract-content" Feb 01 07:58:31 crc kubenswrapper[4835]: E0201 07:58:31.640053 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="extract-utilities" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.640060 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="extract-utilities" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.640197 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d226fc0e-17db-48d6-8c00-dc71f542186d" containerName="registry-server" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.644289 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.652814 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.657752 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.662497 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.669930 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708466 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708518 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-lock\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708620 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s6fv\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-kube-api-access-2s6fv\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708723 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-etc-swift\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.708779 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-cache\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810418 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-lock\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810589 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810676 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-lock\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slpq\" (UniqueName: 
\"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-kube-api-access-6slpq\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810842 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-cache\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810911 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-etc-swift\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.810978 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s6fv\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-kube-api-access-2s6fv\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811048 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811093 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") device mount path \"/mnt/openstack/pv02\"" pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811425 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-etc-swift\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811501 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-cache\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811567 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-lock\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.811887 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-cache\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 
07:58:31.818883 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-etc-swift\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.836146 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s6fv\" (UniqueName: \"kubernetes.io/projected/69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef-kube-api-access-2s6fv\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.841920 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef\") " pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913131 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6slpq\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-kube-api-access-6slpq\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913226 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-cache\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913259 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-etc-swift\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913305 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913663 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913855 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-lock\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.913914 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-cache\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 
01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.914261 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/559d52a7-a172-4c3c-aa13-ba07036485e1-lock\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.919325 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-etc-swift\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.929058 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6slpq\" (UniqueName: \"kubernetes.io/projected/559d52a7-a172-4c3c-aa13-ba07036485e1-kube-api-access-6slpq\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.936272 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-1\" (UID: \"559d52a7-a172-4c3c-aa13-ba07036485e1\") " pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.974856 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Feb 01 07:58:31 crc kubenswrapper[4835]: I0201 07:58:31.994563 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.432829 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Feb 01 07:58:32 crc kubenswrapper[4835]: W0201 07:58:32.436300 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69f0354b_0c3b_4bc5_8aeb_0ac1b59ff0ef.slice/crio-99ff17610b1e1b9be9681d8ef2fc9cdc8c877d7b83e44644c4d649538db9d9e3 WatchSource:0}: Error finding container 99ff17610b1e1b9be9681d8ef2fc9cdc8c877d7b83e44644c4d649538db9d9e3: Status 404 returned error can't find the container with id 99ff17610b1e1b9be9681d8ef2fc9cdc8c877d7b83e44644c4d649538db9d9e3 Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.496786 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Feb 01 07:58:32 crc kubenswrapper[4835]: W0201 07:58:32.504454 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod559d52a7_a172_4c3c_aa13_ba07036485e1.slice/crio-c50f73e90465a9f88dcf982411fc13d4f79db51edc0156b9af41c0cdc105aa6d WatchSource:0}: Error finding container c50f73e90465a9f88dcf982411fc13d4f79db51edc0156b9af41c0cdc105aa6d: Status 404 returned error can't find the container with id c50f73e90465a9f88dcf982411fc13d4f79db51edc0156b9af41c0cdc105aa6d Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.695311 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"f85c49949ac82b12041efeed7d52e54767d284cb9e6eafea6814ad49ca6946f1"} Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.695691 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"99ff17610b1e1b9be9681d8ef2fc9cdc8c877d7b83e44644c4d649538db9d9e3"} Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.696992 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"7a282d6168e4fc22af855060c7caa16d3d89996ec7fca709802d564c7d5cb413"} Feb 01 07:58:32 crc kubenswrapper[4835]: I0201 07:58:32.697011 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"c50f73e90465a9f88dcf982411fc13d4f79db51edc0156b9af41c0cdc105aa6d"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.567584 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.567966 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.568076 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.568121 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:33 crc kubenswrapper[4835]: E0201 07:58:33.568511 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.765227 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="f24e7d5b54eea247b82f3883cc21f16e7aa4caa6af0dc8bcd36658dc6d2f42ef" exitCode=1 Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.765321 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"7b0c5697053a758241dec5fcbbdb0fbd6ae70937550858c99a917a5f0400fb2b"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766095 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed"} Feb 01 
07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766113 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"da897bf1ebee01d787510151130bec56d89c6ce450cd58a55459700162acb7fa"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766121 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"120bf873bececff664e4891e3314412e08fc9b1b04b2e1e12619c10e5426be9f"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766129 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"9159bb986e366f56235b4cb7c77f57f48ee8b200c8fdf1bf6d336ca6aea3ab82"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.766137 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"f24e7d5b54eea247b82f3883cc21f16e7aa4caa6af0dc8bcd36658dc6d2f42ef"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774444 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="a91c7a061710f366c74c3a85530795267a7148635dce19cc596c818cd545af65" exitCode=1 Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774510 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"c8528709d62b403036c46d245ab545d72f4c72dae556c7cd913a6c522309b8ad"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774543 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774572 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"a46c150ba0c28eb42d4905368d97aab983837ab66f9e5a8ae77b8c4533dcad42"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774584 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"b5152f9e83944918ad324cd62f5a8c4e86da92d9f0ed14b5ed68ca341697958e"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774597 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"88f1b6e5372263b6fc301fab8360c8f51cba9897427d8e0bd5f56491d1eda3f1"} Feb 01 07:58:33 crc kubenswrapper[4835]: I0201 07:58:33.774608 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"a91c7a061710f366c74c3a85530795267a7148635dce19cc596c818cd545af65"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789049 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" 
containerID="880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed" exitCode=1 Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789106 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789444 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"3df11a0d072268a6c13e002e29aaa9b6f3829b109ad04c2d2218966599a07de2"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789471 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789479 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789487 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"d5f6ed83daa849a5b58623246bbc78ae3ac07884192fec0a775d0522275c259c"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789494 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"6a0b9399ae0be08a113e4bd1d4305c82b3bcdc7a1a821377deff101aa007dfa8"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.789502 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"ca54beac538f6ae6973cb4eb9b4a67af143d74a149de2e45b76be91d795370e6"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795431 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708" exitCode=1 Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795468 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795490 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795501 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"65dc7266ee11a13fe2c4621a65985c46614f4e16c244282d23b7962db16a47f0"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795509 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795525 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"06e6ea20a6e882ef1dd4eaf6f1eff22d0cdb09cb9ba2cd2ac2f288439e8b0497"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795533 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"27c849c644ce0516291fe192a32dc84da3fc8c003447e0320aa9dce182c1c117"} Feb 01 07:58:34 crc kubenswrapper[4835]: I0201 07:58:34.795541 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"1ef81edf87cfd7dc6d9ec352e17d00c7943bb91f54ab196b0175af87c479b6f2"} Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.820452 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879" exitCode=1 Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.820552 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879"} Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.820594 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"c17fd37ba00889658805c2c386c14292b040b366890151f28e353d1695d2920d"} Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.821380 4835 scope.go:117] "RemoveContainer" containerID="f24e7d5b54eea247b82f3883cc21f16e7aa4caa6af0dc8bcd36658dc6d2f42ef" Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.821503 4835 scope.go:117] "RemoveContainer" containerID="880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed" Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.821620 4835 scope.go:117] "RemoveContainer" containerID="e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879" Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.831026 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37" exitCode=1 Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.831076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37"} Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.831117 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"6dfca8ce2d35261ebf5b46bba676bf2fb2d120fc3e4f9aa076139877c2d73727"} Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.835338 4835 scope.go:117] "RemoveContainer" containerID="a91c7a061710f366c74c3a85530795267a7148635dce19cc596c818cd545af65" Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.835942 4835 scope.go:117] "RemoveContainer" containerID="14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708" Feb 01 07:58:35 crc kubenswrapper[4835]: I0201 07:58:35.836162 4835 scope.go:117] "RemoveContainer" containerID="8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37" Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861015 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf" exitCode=1 Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861380 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb" exitCode=1 Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861083 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6"} Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861575 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf"} Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861617 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb"} Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.861649 4835 scope.go:117] "RemoveContainer" containerID="14371dcc51f78107e65f9620009454edccc5ceff157a028a257c1b7e1dca7708" Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.864439 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb" Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.864650 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf" Feb 01 07:58:36 crc kubenswrapper[4835]: E0201 07:58:36.865504 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873798 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9" exitCode=1 Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873833 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc" exitCode=1 Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873844 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee" exitCode=1 Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873868 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9"} Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873898 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc"} Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.873912 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee"} Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.874978 4835 scope.go:117] "RemoveContainer" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee" Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.875131 4835 scope.go:117] "RemoveContainer" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc" Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.875562 4835 scope.go:117] "RemoveContainer" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9" Feb 01 07:58:36 crc kubenswrapper[4835]: E0201 07:58:36.876099 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:58:36 crc kubenswrapper[4835]: I0201 07:58:36.940379 4835 scope.go:117] "RemoveContainer" containerID="a91c7a061710f366c74c3a85530795267a7148635dce19cc596c818cd545af65" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.004722 4835 scope.go:117] "RemoveContainer" 
containerID="e402fc4a3964869718aa6b942855005121c18ce735e11a2370dece42f35ad879" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.061731 4835 scope.go:117] "RemoveContainer" containerID="880d86ec1afbb3c274c816bb68775340a96b7442c4264f02a07362912972f0ed" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.118544 4835 scope.go:117] "RemoveContainer" containerID="f24e7d5b54eea247b82f3883cc21f16e7aa4caa6af0dc8bcd36658dc6d2f42ef" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.891004 4835 scope.go:117] "RemoveContainer" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.891528 4835 scope.go:117] "RemoveContainer" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.891750 4835 scope.go:117] "RemoveContainer" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9" Feb 01 07:58:37 crc kubenswrapper[4835]: E0201 07:58:37.892237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.900392 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6" exitCode=1 Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.900489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6"} Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.900537 4835 scope.go:117] "RemoveContainer" containerID="8d711a9565b402e23f2bfa8b8607c93c9b1d461ca1a010915a3e04cede45ad37" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.901401 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.901955 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf" Feb 01 07:58:37 crc kubenswrapper[4835]: I0201 07:58:37.902195 4835 scope.go:117] "RemoveContainer" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6" Feb 01 07:58:37 crc kubenswrapper[4835]: E0201 07:58:37.902730 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:58:38 crc kubenswrapper[4835]: I0201 07:58:38.920959 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb" Feb 01 07:58:38 crc kubenswrapper[4835]: I0201 07:58:38.921702 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf" Feb 01 07:58:38 crc kubenswrapper[4835]: I0201 07:58:38.921920 4835 scope.go:117] "RemoveContainer" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6" Feb 01 07:58:38 crc kubenswrapper[4835]: E0201 07:58:38.922538 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:58:40 crc kubenswrapper[4835]: I0201 07:58:40.566472 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:58:40 crc kubenswrapper[4835]: I0201 07:58:40.567078 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:58:40 crc kubenswrapper[4835]: I0201 07:58:40.567205 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:58:40 crc kubenswrapper[4835]: I0201 07:58:40.567256 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:58:40 crc kubenswrapper[4835]: E0201 07:58:40.567604 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:58:40 crc kubenswrapper[4835]: E0201 07:58:40.567709 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.000349 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" exitCode=1 Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.000444 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9"} Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.001177 4835 scope.go:117] "RemoveContainer" containerID="4ba11c9f6be15acd5d3543ccf13bbfa830ab68fbb85b3cdf2888e5b0e15b8758" Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.002846 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.002967 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.003011 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.003192 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:46 crc kubenswrapper[4835]: I0201 07:58:46.003271 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:46 crc kubenswrapper[4835]: E0201 07:58:46.003817 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.040206 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" 
containerID="b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99" exitCode=1 Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.040835 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99"} Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.041871 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb" Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.042009 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf" Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.042057 4835 scope.go:117] "RemoveContainer" containerID="b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99" Feb 01 07:58:48 crc kubenswrapper[4835]: I0201 07:58:48.042225 4835 scope.go:117] "RemoveContainer" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6" Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.072488 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" exitCode=1 Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073023 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac"} Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592"} Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073109 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df"} Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073128 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a"} Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073152 4835 scope.go:117] "RemoveContainer" containerID="3a767af05b048879803243c25351204dd65f0b109ee99b6cd9f8634468705cdf" Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.073048 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" exitCode=1 Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.074228 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.074316 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:58:49 crc kubenswrapper[4835]: E0201 07:58:49.074722 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:58:49 crc kubenswrapper[4835]: I0201 07:58:49.152726 4835 scope.go:117] "RemoveContainer" containerID="16296fde47387124d48bc32da647a9e20f77daf7b849305b55b16d2a894462eb" Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.100448 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" exitCode=1 Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.100517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac"} Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.100618 4835 scope.go:117] "RemoveContainer" containerID="9ba45f9f1b80a6d656b66bc96abf184434dcb51ab0db80ef051a87d6d94cd0a6" Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.101706 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.101840 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:58:50 crc kubenswrapper[4835]: I0201 07:58:50.102045 4835 scope.go:117] "RemoveContainer" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" Feb 01 07:58:50 crc kubenswrapper[4835]: E0201 07:58:50.102639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:58:51 crc kubenswrapper[4835]: I0201 07:58:51.566961 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:58:51 crc kubenswrapper[4835]: I0201 07:58:51.567266 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:58:51 crc kubenswrapper[4835]: E0201 07:58:51.567519 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to 
\"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:58:52 crc kubenswrapper[4835]: I0201 07:58:52.568029 4835 scope.go:117] "RemoveContainer" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee" Feb 01 07:58:52 crc kubenswrapper[4835]: I0201 07:58:52.568156 4835 scope.go:117] "RemoveContainer" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc" Feb 01 07:58:52 crc kubenswrapper[4835]: I0201 07:58:52.568359 4835 scope.go:117] "RemoveContainer" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9" Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149294 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" exitCode=1 Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149522 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2"} Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149616 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e"} Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149629 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999"} Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.149652 4835 scope.go:117] "RemoveContainer" containerID="24fbd046a34a1b9d6428d2c5efec8ae997587bfdf02917319174fda0edd686ee" Feb 01 07:58:53 crc kubenswrapper[4835]: I0201 07:58:53.196834 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:58:53 crc kubenswrapper[4835]: E0201 07:58:53.196952 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 07:58:53 crc kubenswrapper[4835]: E0201 07:58:53.197244 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:00:55.197227764 +0000 UTC m=+2328.317664188 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169714 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" exitCode=1 Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169887 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" exitCode=1 Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169916 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2"} Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e"} Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.169984 4835 scope.go:117] "RemoveContainer" containerID="28e6f28289f183b2c5c76f8a8aba3a65e10957d3ab1c4f856ad5f31bae944ec9" Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.170956 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.171092 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.171307 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:58:54 crc kubenswrapper[4835]: E0201 07:58:54.172188 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:58:54 crc kubenswrapper[4835]: I0201 07:58:54.255531 4835 scope.go:117] "RemoveContainer" containerID="02f4e087f95ab22156276839267ba910ea051aa2dbd05679bb92dec6c69321fc" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.187405 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.187593 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:58:55 crc 
kubenswrapper[4835]: I0201 07:58:55.187817 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:58:55 crc kubenswrapper[4835]: E0201 07:58:55.188280 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.191382 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.191469 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.191525 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.192177 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.192278 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" gracePeriod=600 Feb 01 07:58:55 crc kubenswrapper[4835]: E0201 07:58:55.320526 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.566970 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:58:55 crc kubenswrapper[4835]: I0201 07:58:55.567001 4835 
scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:58:55 crc kubenswrapper[4835]: E0201 07:58:55.567282 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:58:56 crc kubenswrapper[4835]: I0201 07:58:56.214338 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" exitCode=0 Feb 01 07:58:56 crc kubenswrapper[4835]: I0201 07:58:56.214461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8"} Feb 01 07:58:56 crc kubenswrapper[4835]: I0201 07:58:56.214949 4835 scope.go:117] "RemoveContainer" containerID="d638555a7804d9b2393754d14295137aca5e115889b061826bbd0511ac275ab7" Feb 01 07:58:56 crc kubenswrapper[4835]: I0201 07:58:56.215889 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:58:56 crc kubenswrapper[4835]: E0201 07:58:56.216382 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579017 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579209 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579311 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579515 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:58:57 crc kubenswrapper[4835]: I0201 07:58:57.579618 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:58:57 crc kubenswrapper[4835]: E0201 07:58:57.580331 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:00 crc kubenswrapper[4835]: E0201 07:59:00.355327 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 07:59:01 crc kubenswrapper[4835]: I0201 07:59:01.263276 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 07:59:04 crc kubenswrapper[4835]: I0201 07:59:04.567319 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:59:04 crc kubenswrapper[4835]: I0201 07:59:04.567830 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:59:04 crc kubenswrapper[4835]: I0201 07:59:04.568034 4835 scope.go:117] "RemoveContainer" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" Feb 01 07:59:04 crc kubenswrapper[4835]: E0201 07:59:04.568544 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.566622 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.566668 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:05 crc kubenswrapper[4835]: E0201 07:59:05.567046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for 
\"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.568060 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.568185 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:59:05 crc kubenswrapper[4835]: I0201 07:59:05.568360 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:59:05 crc kubenswrapper[4835]: E0201 07:59:05.568940 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:07 crc kubenswrapper[4835]: I0201 07:59:07.577220 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:07 crc kubenswrapper[4835]: I0201 07:59:07.577253 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:59:07 crc kubenswrapper[4835]: E0201 07:59:07.577498 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:09 crc kubenswrapper[4835]: I0201 07:59:09.568380 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:59:09 crc kubenswrapper[4835]: E0201 07:59:09.568797 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.568697 4835 scope.go:117] "RemoveContainer" 
containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.569135 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.569185 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.569315 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:12 crc kubenswrapper[4835]: I0201 07:59:12.569393 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:12 crc kubenswrapper[4835]: E0201 07:59:12.570204 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:16 crc kubenswrapper[4835]: I0201 07:59:16.567410 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:59:16 crc kubenswrapper[4835]: I0201 07:59:16.567834 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:59:16 crc kubenswrapper[4835]: I0201 07:59:16.567919 4835 scope.go:117] "RemoveContainer" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.438891 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" exitCode=1 Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.439233 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6"} Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.439273 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972"} Feb 01 07:59:17 crc 
kubenswrapper[4835]: I0201 07:59:17.439294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e"} Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.439326 4835 scope.go:117] "RemoveContainer" containerID="5c9f183d14fb01deac8350a67dd261ad7f54e4fac26110703f8a0107aaacd47a" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.440322 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:17 crc kubenswrapper[4835]: E0201 07:59:17.441571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.571293 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.571364 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:59:17 crc kubenswrapper[4835]: I0201 07:59:17.571473 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458033 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" exitCode=1 Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458370 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" exitCode=1 Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458237 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458449 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.458471 4835 scope.go:117] "RemoveContainer" containerID="ce1f0f34a241cb27a3224b9b9bae0ad10e5aec6ab1646b0b75ce2c43459f2cac" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.459044 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.459107 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.459214 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:18 crc kubenswrapper[4835]: E0201 07:59:18.459568 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.474610 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" exitCode=1 Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.474684 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.474725 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.474744 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e"} Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.476335 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:18 crc kubenswrapper[4835]: E0201 07:59:18.476890 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.530331 4835 scope.go:117] "RemoveContainer" containerID="8a7d988e4fead16480e78143f90fb219f1ec996d2f4eb08c3871590486cb42df" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.566782 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.566839 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.566970 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.567007 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:18 crc kubenswrapper[4835]: E0201 07:59:18.567222 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:18 crc kubenswrapper[4835]: E0201 07:59:18.567398 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:18 crc kubenswrapper[4835]: I0201 07:59:18.588062 4835 scope.go:117] "RemoveContainer" containerID="e260d39dc564febe46a9955f5d13a70dc5d82a8d16a7615e31839b708397e999" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.490980 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" exitCode=1 Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491012 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" exitCode=1 Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491078 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477"} Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491153 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0"} Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491178 4835 scope.go:117] "RemoveContainer" containerID="a956e1902623997e4d8f074a7de472c5a8a021971e3428cb3e73c2d230a780b2" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491800 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491867 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.491961 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:19 crc kubenswrapper[4835]: E0201 07:59:19.492226 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:19 crc kubenswrapper[4835]: I0201 07:59:19.539307 4835 scope.go:117] "RemoveContainer" containerID="eea22ed64b294abc55860e81d793079539f7bd8406e2db714a48af460ef4679e" Feb 01 07:59:20 crc kubenswrapper[4835]: I0201 07:59:20.522209 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:20 crc kubenswrapper[4835]: I0201 07:59:20.523580 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:20 crc kubenswrapper[4835]: I0201 07:59:20.524014 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:20 crc kubenswrapper[4835]: E0201 07:59:20.524882 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:22 crc kubenswrapper[4835]: I0201 07:59:22.567705 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:59:22 crc kubenswrapper[4835]: E0201 07:59:22.568373 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.567994 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.568135 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.568180 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.568300 4835 scope.go:117] "RemoveContainer" 
containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:25 crc kubenswrapper[4835]: I0201 07:59:25.568365 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:25 crc kubenswrapper[4835]: E0201 07:59:25.569020 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.584523 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592" exitCode=1 Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.584596 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592"} Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.585042 4835 scope.go:117] "RemoveContainer" containerID="b508dd1e9a5ac0729281d3a6c666b8d546c4995637382ac9002224de0b2bcd99" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.586030 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.586160 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.586209 4835 scope.go:117] "RemoveContainer" containerID="6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592" Feb 01 07:59:26 crc kubenswrapper[4835]: I0201 07:59:26.586357 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:26 crc kubenswrapper[4835]: E0201 07:59:26.587285 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:32 crc kubenswrapper[4835]: I0201 07:59:32.567469 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:32 crc kubenswrapper[4835]: I0201 07:59:32.568519 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:32 crc kubenswrapper[4835]: E0201 07:59:32.568860 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:33 crc kubenswrapper[4835]: I0201 07:59:33.567012 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:33 crc kubenswrapper[4835]: I0201 07:59:33.567363 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:59:33 crc kubenswrapper[4835]: E0201 07:59:33.753378 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.748643 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" exitCode=1 Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.748710 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390"} Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.748768 4835 scope.go:117] "RemoveContainer" containerID="82a8e4c9c6b19c78fb9bf918af858bc3166a349f6f604d023c815e1baff9028f" Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.750278 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:34 crc kubenswrapper[4835]: I0201 07:59:34.750367 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 07:59:34 crc 
kubenswrapper[4835]: E0201 07:59:34.751459 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:35 crc kubenswrapper[4835]: I0201 07:59:35.567033 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:59:35 crc kubenswrapper[4835]: E0201 07:59:35.567502 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:59:35 crc kubenswrapper[4835]: I0201 07:59:35.569155 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:35 crc kubenswrapper[4835]: I0201 07:59:35.569824 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:35 crc kubenswrapper[4835]: I0201 07:59:35.570145 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:35 crc kubenswrapper[4835]: E0201 07:59:35.570794 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:36 crc kubenswrapper[4835]: I0201 07:59:36.535207 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:59:36 crc kubenswrapper[4835]: I0201 07:59:36.536198 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:36 crc kubenswrapper[4835]: I0201 07:59:36.536370 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 07:59:36 crc kubenswrapper[4835]: E0201 07:59:36.537322 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.535279 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.536035 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.536054 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 07:59:37 crc kubenswrapper[4835]: E0201 07:59:37.536490 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.791370 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea" exitCode=1 Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.791442 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea"} Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.792202 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.792356 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.792471 4835 scope.go:117] "RemoveContainer" containerID="c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea" Feb 01 07:59:37 crc kubenswrapper[4835]: I0201 07:59:37.792493 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:38 crc kubenswrapper[4835]: E0201 07:59:38.033966 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.568284 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.568954 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.569014 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.569196 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.569290 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:38 crc kubenswrapper[4835]: E0201 07:59:38.570078 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.815550 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c"} Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.816598 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.816731 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:38 crc kubenswrapper[4835]: I0201 07:59:38.816935 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:38 crc kubenswrapper[4835]: E0201 07:59:38.817503 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.567661 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.567780 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.567810 4835 scope.go:117] "RemoveContainer" containerID="6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.567913 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:40 crc kubenswrapper[4835]: E0201 07:59:40.760902 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.842778 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827"} Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.843611 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.843788 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:40 crc kubenswrapper[4835]: I0201 07:59:40.843966 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:40 crc kubenswrapper[4835]: E0201 07:59:40.844579 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.573241 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.573868 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:47 crc kubenswrapper[4835]: E0201 07:59:47.736345 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.914489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d"} Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.915049 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:59:47 crc kubenswrapper[4835]: I0201 07:59:47.915399 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:47 crc kubenswrapper[4835]: E0201 07:59:47.915867 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.567925 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.568326 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 07:59:48 crc kubenswrapper[4835]: E0201 07:59:48.568706 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.929197 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" 
containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" exitCode=1 Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.929279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d"} Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.929357 4835 scope.go:117] "RemoveContainer" containerID="d6870e1d4b05abcf0b327a967c26cdf5295bd4e946ad6f1233fad69c0976cd11" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.931237 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:48 crc kubenswrapper[4835]: I0201 07:59:48.931294 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 07:59:48 crc kubenswrapper[4835]: E0201 07:59:48.932211 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:49 crc kubenswrapper[4835]: I0201 07:59:49.019245 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 07:59:49 crc kubenswrapper[4835]: I0201 07:59:49.941142 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:49 crc kubenswrapper[4835]: I0201 07:59:49.941175 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 07:59:49 crc kubenswrapper[4835]: E0201 07:59:49.941521 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.567610 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 07:59:50 crc kubenswrapper[4835]: E0201 07:59:50.568000 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 
07:59:50.568028 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.568139 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.568182 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.568302 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.568398 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.953504 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"} Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.953980 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 07:59:50 crc kubenswrapper[4835]: I0201 07:59:50.953999 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 07:59:50 crc kubenswrapper[4835]: E0201 07:59:50.954215 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 07:59:51 crc kubenswrapper[4835]: E0201 07:59:51.358675 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973114 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" exitCode=1 Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973170 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" exitCode=1 Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973181 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" exitCode=1 Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973180 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b"} Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973189 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" exitCode=1 Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973248 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b"} Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973265 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be"} Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973277 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90"} Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.973301 4835 scope.go:117] "RemoveContainer" containerID="e90d9c77a8fd874aee99c52015e53c635f9ebd853fa023b5f045ec9455599f89" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974102 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974249 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974310 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974476 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 07:59:51 crc kubenswrapper[4835]: I0201 07:59:51.974563 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 07:59:51 crc kubenswrapper[4835]: E0201 07:59:51.975090 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.021023 4835 scope.go:117] "RemoveContainer" containerID="b4af1d3e8f59c8ef215f1c128ca3eaf7aa7c754a998a9de44c641d30297e5536" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.058937 4835 scope.go:117] "RemoveContainer" containerID="df202babe580c8edd052c1129f361dc29f5383074f4e24c67a039c98381ec150" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.098855 4835 scope.go:117] "RemoveContainer" containerID="9609014f7d3eef34e6d90d188a3d09a66130508eb9e51585570f08963fe4f794" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.566946 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567031 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567134 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 07:59:52 crc kubenswrapper[4835]: E0201 07:59:52.567461 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567496 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567577 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.567690 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 07:59:52 crc kubenswrapper[4835]: E0201 07:59:52.568021 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 07:59:52 crc kubenswrapper[4835]: 
I0201 07:59:52.991612 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.991724 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.991766 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.991873 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 07:59:52 crc kubenswrapper[4835]: I0201 07:59:52.991932 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 07:59:52 crc kubenswrapper[4835]: E0201 07:59:52.992435 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.609006 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"] Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.613042 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.629480 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"] Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.692146 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.692589 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.692749 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.793675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.793756 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.793892 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.794561 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.794592 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.816844 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") pod \"certified-operators-4jnp6\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:55 crc kubenswrapper[4835]: I0201 07:59:55.968944 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 07:59:56 crc kubenswrapper[4835]: I0201 07:59:56.241739 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"] Feb 01 07:59:57 crc kubenswrapper[4835]: I0201 07:59:57.034488 4835 generic.go:334] "Generic (PLEG): container finished" podID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerID="96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290" exitCode=0 Feb 01 07:59:57 crc kubenswrapper[4835]: I0201 07:59:57.034580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerDied","Data":"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290"} Feb 01 07:59:57 crc kubenswrapper[4835]: I0201 07:59:57.034841 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerStarted","Data":"34cf027c170ecf8c1066f7cff2eb82430d01141b4a84987b234957bbdf3450df"} Feb 01 07:59:58 crc kubenswrapper[4835]: I0201 07:59:58.042332 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerStarted","Data":"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0"} Feb 01 07:59:59 crc kubenswrapper[4835]: I0201 07:59:59.055032 4835 generic.go:334] "Generic (PLEG): container finished" podID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerID="c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0" exitCode=0 Feb 01 07:59:59 crc kubenswrapper[4835]: I0201 07:59:59.055079 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerDied","Data":"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0"} Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.070096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerStarted","Data":"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8"} Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.104226 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4jnp6" podStartSLOduration=2.688546499 podStartE2EDuration="5.104199922s" podCreationTimestamp="2026-02-01 07:59:55 +0000 UTC" firstStartedPulling="2026-02-01 07:59:57.036823892 +0000 UTC m=+2270.157260336" lastFinishedPulling="2026-02-01 07:59:59.452477305 +0000 UTC m=+2272.572913759" observedRunningTime="2026-02-01 08:00:00.099247723 +0000 UTC m=+2273.219684247" watchObservedRunningTime="2026-02-01 08:00:00.104199922 +0000 UTC m=+2273.224636396" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.166147 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t"] Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.168583 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.171837 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.171860 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.181809 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t"] Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.259614 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.259676 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.259870 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.360882 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.360923 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.360991 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.362121 
4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.367645 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.386232 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") pod \"collect-profiles-29498880-sbg6t\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.502387 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:00 crc kubenswrapper[4835]: I0201 08:00:00.771337 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t"] Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.084105 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" event={"ID":"0334d6e7-9af5-4634-ab64-18017a9439df","Type":"ContainerStarted","Data":"738d34fc9bdc4a803474eb920a4f2a18316b6617e4c12f62696f18a290c74eb7"} Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.084145 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" event={"ID":"0334d6e7-9af5-4634-ab64-18017a9439df","Type":"ContainerStarted","Data":"4ffea0e07b972060df811e5fc9d81f16f61f3136c1a98aea12ad65dab87c22b9"} Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.109165 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" podStartSLOduration=1.109133597 podStartE2EDuration="1.109133597s" podCreationTimestamp="2026-02-01 08:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 08:00:01.100598284 +0000 UTC m=+2274.221034718" watchObservedRunningTime="2026-02-01 08:00:01.109133597 +0000 UTC m=+2274.229570051" Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.566515 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.567006 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:01 crc kubenswrapper[4835]: I0201 08:00:01.567033 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:01 crc kubenswrapper[4835]: E0201 08:00:01.567189 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:01 crc kubenswrapper[4835]: E0201 08:00:01.567304 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:02 crc kubenswrapper[4835]: I0201 08:00:02.095184 4835 generic.go:334] "Generic (PLEG): container finished" podID="0334d6e7-9af5-4634-ab64-18017a9439df" containerID="738d34fc9bdc4a803474eb920a4f2a18316b6617e4c12f62696f18a290c74eb7" exitCode=0 Feb 01 08:00:02 crc kubenswrapper[4835]: I0201 08:00:02.095232 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" event={"ID":"0334d6e7-9af5-4634-ab64-18017a9439df","Type":"ContainerDied","Data":"738d34fc9bdc4a803474eb920a4f2a18316b6617e4c12f62696f18a290c74eb7"} Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.425282 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.509845 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") pod \"0334d6e7-9af5-4634-ab64-18017a9439df\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.509936 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") pod \"0334d6e7-9af5-4634-ab64-18017a9439df\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.509976 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") pod \"0334d6e7-9af5-4634-ab64-18017a9439df\" (UID: \"0334d6e7-9af5-4634-ab64-18017a9439df\") " Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.510943 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume" (OuterVolumeSpecName: "config-volume") pod "0334d6e7-9af5-4634-ab64-18017a9439df" (UID: "0334d6e7-9af5-4634-ab64-18017a9439df"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.515139 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf" (OuterVolumeSpecName: "kube-api-access-qnstf") pod "0334d6e7-9af5-4634-ab64-18017a9439df" (UID: "0334d6e7-9af5-4634-ab64-18017a9439df"). InnerVolumeSpecName "kube-api-access-qnstf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.515191 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0334d6e7-9af5-4634-ab64-18017a9439df" (UID: "0334d6e7-9af5-4634-ab64-18017a9439df"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.567059 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.567108 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:03 crc kubenswrapper[4835]: E0201 08:00:03.567564 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.611466 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnstf\" (UniqueName: \"kubernetes.io/projected/0334d6e7-9af5-4634-ab64-18017a9439df-kube-api-access-qnstf\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.611492 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0334d6e7-9af5-4634-ab64-18017a9439df-config-volume\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:03 crc kubenswrapper[4835]: I0201 08:00:03.611502 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0334d6e7-9af5-4634-ab64-18017a9439df-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.110038 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" event={"ID":"0334d6e7-9af5-4634-ab64-18017a9439df","Type":"ContainerDied","Data":"4ffea0e07b972060df811e5fc9d81f16f61f3136c1a98aea12ad65dab87c22b9"} Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.110085 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ffea0e07b972060df811e5fc9d81f16f61f3136c1a98aea12ad65dab87c22b9" Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.110136 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29498880-sbg6t" Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.515871 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"] Feb 01 08:00:04 crc kubenswrapper[4835]: I0201 08:00:04.523535 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29498835-zbz9x"] Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.566737 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.567131 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.567214 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.578090 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137b200e-5dcd-43c9-82e2-332071d84cb0" path="/var/lib/kubelet/pods/137b200e-5dcd-43c9-82e2-332071d84cb0/volumes" Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.969780 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 08:00:05 crc kubenswrapper[4835]: I0201 08:00:05.969834 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.020472 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.142792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895"} Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.145635 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6"} Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.181524 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 08:00:06 crc kubenswrapper[4835]: I0201 08:00:06.250730 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"] Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162318 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" exitCode=1 Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162370 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" exitCode=1 Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162390 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" 
containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" exitCode=1 Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162443 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9"} Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162506 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895"} Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162527 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6"} Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.162542 4835 scope.go:117] "RemoveContainer" containerID="c2d56c28efba2b119273e905106a885bf6c8c70cec0b835aea9fe74b9ae37fd6" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.163352 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.163518 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.163718 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:07 crc kubenswrapper[4835]: E0201 08:00:07.164295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.213649 4835 scope.go:117] "RemoveContainer" containerID="73ec1f336936452627c4a8e9c497190b4ad0915844d7b342a988b90047ad4972" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.262829 4835 scope.go:117] "RemoveContainer" containerID="423aa0b4aff41f70a2984d1ef0c8d0e0175795d49a51097d89b32c133422941e" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.574345 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.574546 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 08:00:07 crc kubenswrapper[4835]: I0201 08:00:07.574823 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 
08:00:08.185351 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.185456 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.185559 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:08 crc kubenswrapper[4835]: E0201 08:00:08.186012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.204232 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" exitCode=1 Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.204453 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4jnp6" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="registry-server" containerID="cri-o://eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" gracePeriod=2 Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.204712 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da"} Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.205642 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c"} Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.205778 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792"} Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.206466 4835 scope.go:117] "RemoveContainer" containerID="6f6f9fd3f963aaf7df290a2d825d0aa805464bef1b53143c74d5d8787df0b41e" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.207463 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:08 crc kubenswrapper[4835]: E0201 08:00:08.209088 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568201 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568651 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568700 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568837 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.568955 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.736640 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 08:00:08 crc kubenswrapper[4835]: E0201 08:00:08.799779 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.898483 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") pod \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.898541 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") pod \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.898621 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") pod \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\" (UID: \"51ab27b9-c1c7-48b0-a4b0-185857b275e3\") " Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.899875 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities" (OuterVolumeSpecName: "utilities") pod "51ab27b9-c1c7-48b0-a4b0-185857b275e3" (UID: "51ab27b9-c1c7-48b0-a4b0-185857b275e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.905734 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf" (OuterVolumeSpecName: "kube-api-access-8grpf") pod "51ab27b9-c1c7-48b0-a4b0-185857b275e3" (UID: "51ab27b9-c1c7-48b0-a4b0-185857b275e3"). InnerVolumeSpecName "kube-api-access-8grpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.952814 4835 scope.go:117] "RemoveContainer" containerID="98c793df94b793188e86124f6ff1a8161f18d725c6666c0e72eb3d6113d10246" Feb 01 08:00:08 crc kubenswrapper[4835]: I0201 08:00:08.955881 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51ab27b9-c1c7-48b0-a4b0-185857b275e3" (UID: "51ab27b9-c1c7-48b0-a4b0-185857b275e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.000545 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.000575 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8grpf\" (UniqueName: \"kubernetes.io/projected/51ab27b9-c1c7-48b0-a4b0-185857b275e3-kube-api-access-8grpf\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.000584 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ab27b9-c1c7-48b0-a4b0-185857b275e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.217478 4835 generic.go:334] "Generic (PLEG): container finished" podID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerID="eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" exitCode=0 Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.217555 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerDied","Data":"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.217598 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4jnp6" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.217973 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4jnp6" event={"ID":"51ab27b9-c1c7-48b0-a4b0-185857b275e3","Type":"ContainerDied","Data":"34cf027c170ecf8c1066f7cff2eb82430d01141b4a84987b234957bbdf3450df"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.218032 4835 scope.go:117] "RemoveContainer" containerID="eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.230073 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" exitCode=1 Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.230125 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" exitCode=1 Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.230157 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.230222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.231488 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.231693 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.231993 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.233134 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.244689 4835 scope.go:117] "RemoveContainer" containerID="c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.247816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4"} Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.248926 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.249145 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.249449 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.249608 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.250126 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.280201 4835 scope.go:117] "RemoveContainer" containerID="96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.314534 4835 scope.go:117] "RemoveContainer" containerID="eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.316181 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8\": container with ID starting with eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8 not found: ID does not exist" containerID="eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.316247 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8"} err="failed to get container status \"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8\": rpc error: code = NotFound desc = could not find container \"eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8\": container with ID starting with eb3f1d5a12e583e862a8b0d182ddb37b5abc5b2da8d011c7a3b3d01c6b096aa8 not found: ID does not exist" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.316292 4835 scope.go:117] "RemoveContainer" containerID="c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0" 
Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.316712 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0\": container with ID starting with c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0 not found: ID does not exist" containerID="c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.316822 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0"} err="failed to get container status \"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0\": rpc error: code = NotFound desc = could not find container \"c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0\": container with ID starting with c5a96ca5940c37d461db2f9905b430779d5425969ac35eb35d2eac90240d23e0 not found: ID does not exist" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.316916 4835 scope.go:117] "RemoveContainer" containerID="96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290" Feb 01 08:00:09 crc kubenswrapper[4835]: E0201 08:00:09.317704 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290\": container with ID starting with 96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290 not found: ID does not exist" containerID="96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.317763 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290"} err="failed to get container status \"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290\": rpc error: code = NotFound desc = could not find container \"96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290\": container with ID starting with 96317a993d452eaa0054809f8b1c77e2cf7c7b695470e976783015d81f1ab290 not found: ID does not exist" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.317799 4835 scope.go:117] "RemoveContainer" containerID="7bd881ed8964128da50b3db280e449aa587ee47d14f89728ca2728626a79a477" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.318515 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"] Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.336153 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4jnp6"] Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.364935 4835 scope.go:117] "RemoveContainer" containerID="2a58efc23acee73d22ccbe082a09919def8f9135b5ca1d0f04147837777729f0" Feb 01 08:00:09 crc kubenswrapper[4835]: I0201 08:00:09.584788 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" path="/var/lib/kubelet/pods/51ab27b9-c1c7-48b0-a4b0-185857b275e3/volumes" Feb 01 08:00:10 crc kubenswrapper[4835]: I0201 08:00:10.266931 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:10 crc kubenswrapper[4835]: I0201 08:00:10.267007 4835 scope.go:117] "RemoveContainer" 
containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:10 crc kubenswrapper[4835]: I0201 08:00:10.267125 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:10 crc kubenswrapper[4835]: E0201 08:00:10.267442 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:13 crc kubenswrapper[4835]: I0201 08:00:13.566839 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:13 crc kubenswrapper[4835]: I0201 08:00:13.567443 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:13 crc kubenswrapper[4835]: E0201 08:00:13.567923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:14 crc kubenswrapper[4835]: I0201 08:00:14.568664 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:14 crc kubenswrapper[4835]: E0201 08:00:14.568993 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.324147 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" exitCode=1 Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.324210 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827"} Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.324653 4835 scope.go:117] "RemoveContainer" 
containerID="6b13e362c79ee7da812063d3725213416d72ec13aecff7de5df3b32c3456d592" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.325250 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.325307 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.325328 4835 scope.go:117] "RemoveContainer" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" Feb 01 08:00:15 crc kubenswrapper[4835]: I0201 08:00:15.325390 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:15 crc kubenswrapper[4835]: E0201 08:00:15.325672 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:16 crc kubenswrapper[4835]: I0201 08:00:16.566808 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:16 crc kubenswrapper[4835]: I0201 08:00:16.566851 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:16 crc kubenswrapper[4835]: E0201 08:00:16.567209 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:21 crc kubenswrapper[4835]: I0201 08:00:21.568003 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:21 crc kubenswrapper[4835]: I0201 08:00:21.568659 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:21 crc kubenswrapper[4835]: I0201 08:00:21.568923 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:21 crc kubenswrapper[4835]: E0201 08:00:21.569645 4835 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:22 crc kubenswrapper[4835]: I0201 08:00:22.566799 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:00:22 crc kubenswrapper[4835]: I0201 08:00:22.567181 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:00:22 crc kubenswrapper[4835]: I0201 08:00:22.567279 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:00:22 crc kubenswrapper[4835]: I0201 08:00:22.567322 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:00:22 crc kubenswrapper[4835]: E0201 08:00:22.567740 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:00:25 crc kubenswrapper[4835]: I0201 08:00:25.567446 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:25 crc kubenswrapper[4835]: E0201 08:00:25.567819 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:28 crc kubenswrapper[4835]: I0201 08:00:28.567488 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:28 crc kubenswrapper[4835]: I0201 08:00:28.567749 4835 scope.go:117] "RemoveContainer" 
containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:28 crc kubenswrapper[4835]: E0201 08:00:28.568218 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567010 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567357 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567441 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567641 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:29 crc kubenswrapper[4835]: E0201 08:00:29.567642 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567700 4835 scope.go:117] "RemoveContainer" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" Feb 01 08:00:29 crc kubenswrapper[4835]: I0201 08:00:29.567883 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:29 crc kubenswrapper[4835]: E0201 08:00:29.568932 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" 
podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.544837 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" exitCode=1 Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.544885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4"} Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.545359 4835 scope.go:117] "RemoveContainer" containerID="ec7f7a60f01d2f831b0a1a2281275328733630897c0d8daf5f2c4b53f8d649e9" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546576 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546723 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546781 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546921 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:00:31 crc kubenswrapper[4835]: I0201 08:00:31.546993 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:00:31 crc kubenswrapper[4835]: E0201 08:00:31.547612 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:00:36 crc kubenswrapper[4835]: I0201 08:00:36.566924 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:36 crc kubenswrapper[4835]: I0201 08:00:36.567318 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:36 crc kubenswrapper[4835]: I0201 08:00:36.567467 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:36 crc 
kubenswrapper[4835]: E0201 08:00:36.567928 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:38 crc kubenswrapper[4835]: I0201 08:00:38.567364 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:38 crc kubenswrapper[4835]: E0201 08:00:38.568359 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:39 crc kubenswrapper[4835]: I0201 08:00:39.567026 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:39 crc kubenswrapper[4835]: I0201 08:00:39.567395 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:39 crc kubenswrapper[4835]: E0201 08:00:39.567766 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:40 crc kubenswrapper[4835]: I0201 08:00:40.566827 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:40 crc kubenswrapper[4835]: I0201 08:00:40.566900 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:40 crc kubenswrapper[4835]: I0201 08:00:40.566922 4835 scope.go:117] "RemoveContainer" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" Feb 01 08:00:40 crc kubenswrapper[4835]: I0201 08:00:40.566980 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:40 crc kubenswrapper[4835]: E0201 08:00:40.767147 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.567350 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.567394 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:41 crc kubenswrapper[4835]: E0201 08:00:41.567791 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.649196 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d"} Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.649952 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.650028 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:41 crc kubenswrapper[4835]: I0201 08:00:41.650140 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:41 crc kubenswrapper[4835]: E0201 08:00:41.650554 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:45 crc kubenswrapper[4835]: I0201 08:00:45.567896 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:00:45 crc 
kubenswrapper[4835]: I0201 08:00:45.568038 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:00:45 crc kubenswrapper[4835]: I0201 08:00:45.568084 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:00:45 crc kubenswrapper[4835]: I0201 08:00:45.568204 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:00:45 crc kubenswrapper[4835]: I0201 08:00:45.568271 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:00:45 crc kubenswrapper[4835]: E0201 08:00:45.568977 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:00:47 crc kubenswrapper[4835]: I0201 08:00:47.580157 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:47 crc kubenswrapper[4835]: I0201 08:00:47.580291 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:47 crc kubenswrapper[4835]: I0201 08:00:47.580528 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:47 crc kubenswrapper[4835]: E0201 08:00:47.581068 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:00:49 crc kubenswrapper[4835]: I0201 08:00:49.567002 4835 scope.go:117] "RemoveContainer" 
containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:00:49 crc kubenswrapper[4835]: E0201 08:00:49.567882 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:00:50 crc kubenswrapper[4835]: I0201 08:00:50.567642 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:00:50 crc kubenswrapper[4835]: I0201 08:00:50.568731 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:00:50 crc kubenswrapper[4835]: E0201 08:00:50.569378 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:00:52 crc kubenswrapper[4835]: I0201 08:00:52.567596 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:00:52 crc kubenswrapper[4835]: I0201 08:00:52.567724 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:00:52 crc kubenswrapper[4835]: I0201 08:00:52.567901 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:00:52 crc kubenswrapper[4835]: E0201 08:00:52.568363 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:00:55 crc kubenswrapper[4835]: I0201 08:00:55.282121 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:00:55 crc kubenswrapper[4835]: E0201 08:00:55.282393 4835 configmap.go:193] Couldn't get configMap 
swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:00:55 crc kubenswrapper[4835]: E0201 08:00:55.284215 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:02:57.284178525 +0000 UTC m=+2450.404614999 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 08:00:56 crc kubenswrapper[4835]: I0201 08:00:56.566987 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:00:56 crc kubenswrapper[4835]: I0201 08:00:56.567022 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:56 crc kubenswrapper[4835]: E0201 08:00:56.839759 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:57 crc kubenswrapper[4835]: I0201 08:00:57.830949 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2"} Feb 01 08:00:57 crc kubenswrapper[4835]: I0201 08:00:57.831267 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:00:57 crc kubenswrapper[4835]: I0201 08:00:57.832057 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:57 crc kubenswrapper[4835]: E0201 08:00:57.832399 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:58 crc kubenswrapper[4835]: I0201 08:00:58.842765 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:00:58 crc kubenswrapper[4835]: E0201 08:00:58.843353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:00:59 crc kubenswrapper[4835]: I0201 08:00:59.567808 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:00:59 crc kubenswrapper[4835]: I0201 
08:00:59.567969 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:00:59 crc kubenswrapper[4835]: I0201 08:00:59.568177 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:00:59 crc kubenswrapper[4835]: E0201 08:00:59.568802 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167154 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-cron-29498881-kfzg5"] Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.167790 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="extract-content" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167821 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="extract-content" Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.167863 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="extract-utilities" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167879 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="extract-utilities" Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.167936 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="registry-server" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167953 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="registry-server" Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.167975 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0334d6e7-9af5-4634-ab64-18017a9439df" containerName="collect-profiles" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.167989 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0334d6e7-9af5-4634-ab64-18017a9439df" containerName="collect-profiles" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.168458 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0334d6e7-9af5-4634-ab64-18017a9439df" containerName="collect-profiles" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.168507 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ab27b9-c1c7-48b0-a4b0-185857b275e3" containerName="registry-server" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.169579 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.199583 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-cron-29498881-kfzg5"] Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.268952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.269051 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.269080 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.371171 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.371378 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.371478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.379355 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.379803 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.389367 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") pod \"keystone-cron-29498881-kfzg5\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.501891 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.567459 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.567933 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.567978 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.568099 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.568172 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:00 crc kubenswrapper[4835]: E0201 08:01:00.569036 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.800297 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-cron-29498881-kfzg5"] Feb 01 08:01:00 crc kubenswrapper[4835]: W0201 08:01:00.814612 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0c36c8d_897d_4b88_a236_44fe0d511c4e.slice/crio-0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860 WatchSource:0}: Error finding container 0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860: Status 404 returned error can't find the container with id 0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860 Feb 01 08:01:00 crc kubenswrapper[4835]: I0201 08:01:00.861272 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" event={"ID":"f0c36c8d-897d-4b88-a236-44fe0d511c4e","Type":"ContainerStarted","Data":"0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860"} Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.021596 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.566921 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.566968 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:01 crc kubenswrapper[4835]: E0201 08:01:01.567473 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.872150 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" event={"ID":"f0c36c8d-897d-4b88-a236-44fe0d511c4e","Type":"ContainerStarted","Data":"a8ce14dd07f9be90e149056ac48a9e7888229aebe9f4685bf1cce84f193f3985"} Feb 01 08:01:01 crc kubenswrapper[4835]: I0201 08:01:01.897081 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" podStartSLOduration=1.897061566 podStartE2EDuration="1.897061566s" podCreationTimestamp="2026-02-01 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 08:01:01.892039035 +0000 UTC m=+2335.012475469" watchObservedRunningTime="2026-02-01 08:01:01.897061566 +0000 UTC m=+2335.017498000" Feb 01 08:01:02 crc kubenswrapper[4835]: I0201 08:01:02.884593 4835 generic.go:334] "Generic (PLEG): container finished" podID="f0c36c8d-897d-4b88-a236-44fe0d511c4e" containerID="a8ce14dd07f9be90e149056ac48a9e7888229aebe9f4685bf1cce84f193f3985" exitCode=0 Feb 01 08:01:02 crc kubenswrapper[4835]: I0201 08:01:02.884690 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" event={"ID":"f0c36c8d-897d-4b88-a236-44fe0d511c4e","Type":"ContainerDied","Data":"a8ce14dd07f9be90e149056ac48a9e7888229aebe9f4685bf1cce84f193f3985"} Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.031512 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.170581 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:04 crc kubenswrapper[4835]: E0201 08:01:04.265066 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.337264 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") pod \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.337380 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") pod \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.337460 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") pod \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\" (UID: \"f0c36c8d-897d-4b88-a236-44fe0d511c4e\") " Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.343324 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5" (OuterVolumeSpecName: "kube-api-access-pbkj5") pod "f0c36c8d-897d-4b88-a236-44fe0d511c4e" (UID: "f0c36c8d-897d-4b88-a236-44fe0d511c4e"). InnerVolumeSpecName "kube-api-access-pbkj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.345337 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f0c36c8d-897d-4b88-a236-44fe0d511c4e" (UID: "f0c36c8d-897d-4b88-a236-44fe0d511c4e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.399690 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data" (OuterVolumeSpecName: "config-data") pod "f0c36c8d-897d-4b88-a236-44fe0d511c4e" (UID: "f0c36c8d-897d-4b88-a236-44fe0d511c4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.439578 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.439628 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c36c8d-897d-4b88-a236-44fe0d511c4e-config-data\") on node \"crc\" DevicePath \"\"" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.439649 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbkj5\" (UniqueName: \"kubernetes.io/projected/f0c36c8d-897d-4b88-a236-44fe0d511c4e-kube-api-access-pbkj5\") on node \"crc\" DevicePath \"\"" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.567331 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:04 crc kubenswrapper[4835]: E0201 08:01:04.567868 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.904220 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.904269 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.904269 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-cron-29498881-kfzg5" event={"ID":"f0c36c8d-897d-4b88-a236-44fe0d511c4e","Type":"ContainerDied","Data":"0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860"} Feb 01 08:01:04 crc kubenswrapper[4835]: I0201 08:01:04.904347 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0635907b63b582e3196010ebff42cbc35db4a0f30b1cfcdd735453c1a8974860" Feb 01 08:01:05 crc kubenswrapper[4835]: I0201 08:01:05.022152 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.022039 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.022489 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.023042 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.023066 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.023102 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2" gracePeriod=30 Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.026633 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:07 crc kubenswrapper[4835]: E0201 08:01:07.355841 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.578737 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.579349 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 
08:01:07.579669 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:07 crc kubenswrapper[4835]: E0201 08:01:07.580449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.934992 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2" exitCode=0 Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935040 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2"} Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"} Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935088 4835 scope.go:117] "RemoveContainer" containerID="7ccd4d11227a2bfc73a9f9bdca64ed02baae54e2e9ddce9faae90930176d7553" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935315 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:01:07 crc kubenswrapper[4835]: I0201 08:01:07.935882 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:07 crc kubenswrapper[4835]: E0201 08:01:07.936210 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:08 crc kubenswrapper[4835]: I0201 08:01:08.952192 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:08 crc kubenswrapper[4835]: E0201 08:01:08.952563 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:10 crc 
kubenswrapper[4835]: I0201 08:01:10.567578 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:10 crc kubenswrapper[4835]: I0201 08:01:10.567988 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:10 crc kubenswrapper[4835]: I0201 08:01:10.568189 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:10 crc kubenswrapper[4835]: E0201 08:01:10.568740 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567123 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567564 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567600 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567696 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:11 crc kubenswrapper[4835]: I0201 08:01:11.567750 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:11 crc kubenswrapper[4835]: E0201 08:01:11.568206 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" 
podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:12 crc kubenswrapper[4835]: I0201 08:01:12.566886 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:12 crc kubenswrapper[4835]: I0201 08:01:12.566930 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:12 crc kubenswrapper[4835]: E0201 08:01:12.567398 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:13 crc kubenswrapper[4835]: I0201 08:01:13.022889 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:15 crc kubenswrapper[4835]: I0201 08:01:15.021861 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:16 crc kubenswrapper[4835]: I0201 08:01:16.021931 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.048364 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01" exitCode=1 Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.048455 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01"} Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.049784 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.049942 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.049988 4835 scope.go:117] "RemoveContainer" containerID="811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01" Feb 01 08:01:17 crc kubenswrapper[4835]: I0201 08:01:17.050128 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:17 crc kubenswrapper[4835]: E0201 08:01:17.245808 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed 
container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.074719 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160"} Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.075827 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.075940 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.076145 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:18 crc kubenswrapper[4835]: E0201 08:01:18.076626 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:18 crc kubenswrapper[4835]: I0201 08:01:18.567318 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:18 crc kubenswrapper[4835]: E0201 08:01:18.567689 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.022157 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.022266 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:01:19 
crc kubenswrapper[4835]: I0201 08:01:19.023872 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.023929 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.023977 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" gracePeriod=30 Feb 01 08:01:19 crc kubenswrapper[4835]: I0201 08:01:19.025301 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:19 crc kubenswrapper[4835]: E0201 08:01:19.149848 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.019583 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.100:8080/healthcheck\": dial tcp 10.217.0.100:8080: connect: connection refused" Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.111532 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" exitCode=0 Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.111571 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1"} Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.111606 4835 scope.go:117] "RemoveContainer" containerID="62b6a5cb54d4a51567343f41930b30b226710837af82d44b899bbf60472b25a2" Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.112709 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:01:20 crc kubenswrapper[4835]: I0201 08:01:20.112771 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:20 crc kubenswrapper[4835]: E0201 08:01:20.113259 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:22 crc kubenswrapper[4835]: I0201 08:01:22.568572 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:01:22 crc kubenswrapper[4835]: I0201 08:01:22.568978 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:22 crc kubenswrapper[4835]: I0201 08:01:22.569164 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:22 crc kubenswrapper[4835]: E0201 08:01:22.569674 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569062 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569190 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569233 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569351 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:24 crc kubenswrapper[4835]: I0201 08:01:24.569454 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:24 crc kubenswrapper[4835]: E0201 08:01:24.570040 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:25 crc kubenswrapper[4835]: I0201 08:01:25.567328 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:25 crc kubenswrapper[4835]: I0201 08:01:25.567854 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:25 crc kubenswrapper[4835]: E0201 08:01:25.568325 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.174189 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2" exitCode=1 Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.174296 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2"} Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.175940 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.176153 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.176386 4835 scope.go:117] "RemoveContainer" containerID="700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2" Feb 01 08:01:26 crc kubenswrapper[4835]: I0201 08:01:26.176479 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:26 crc kubenswrapper[4835]: E0201 08:01:26.515005 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" 
Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.200123 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" exitCode=1 Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.200378 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996"} Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.200575 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396"} Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.200609 4835 scope.go:117] "RemoveContainer" containerID="23e36b69edb6b2ccb8aaf7f6c2b7e99a11ad5832d65368f173a5de90490917b6" Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.201392 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.201558 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:27 crc kubenswrapper[4835]: I0201 08:01:27.201767 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:27 crc kubenswrapper[4835]: E0201 08:01:27.618226 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222495 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" exitCode=1 Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222528 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" exitCode=1 Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222550 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38"} Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112"} Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.222747 4835 scope.go:117] "RemoveContainer" containerID="e4976533799a2595a6569259393b0c6441124d6684c5821b2d7aebb06ca16ed9" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.223353 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 
08:01:28.223433 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.223534 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:01:28 crc kubenswrapper[4835]: E0201 08:01:28.223799 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:28 crc kubenswrapper[4835]: I0201 08:01:28.297832 4835 scope.go:117] "RemoveContainer" containerID="69c04f75ecf54e2796f6c32c9dd9cbeba95090bc684d2b880a0f6a4caace5895" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.246579 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.246742 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.246994 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:01:29 crc kubenswrapper[4835]: E0201 08:01:29.247638 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.568576 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.568735 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:29 crc kubenswrapper[4835]: I0201 08:01:29.568932 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268463 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" 
containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" exitCode=1 Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"} Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"} Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268948 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315"} Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.268971 4835 scope.go:117] "RemoveContainer" containerID="2ea806bb814d70ff372f2180fb34dba40298e5023882c289e712a9c12df57792" Feb 01 08:01:30 crc kubenswrapper[4835]: I0201 08:01:30.269678 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:01:30 crc kubenswrapper[4835]: E0201 08:01:30.270239 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316789 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" exitCode=1 Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316842 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" exitCode=1 Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316872 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d"} Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316914 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664"} Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.316947 4835 scope.go:117] "RemoveContainer" containerID="718190857b24b3d5ef0a889d5e59643cc84bf87a465b257c19d00ebe9a6991da" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.317801 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.317942 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.318139 4835 scope.go:117] 
"RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:01:31 crc kubenswrapper[4835]: E0201 08:01:31.318762 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.393941 4835 scope.go:117] "RemoveContainer" containerID="c03e2dcb9fe07aa94b8ece651d6516835a102c6faf0c43c07b5d34eea3ed081c" Feb 01 08:01:31 crc kubenswrapper[4835]: I0201 08:01:31.567598 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:31 crc kubenswrapper[4835]: E0201 08:01:31.568132 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:01:35 crc kubenswrapper[4835]: I0201 08:01:35.566614 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:01:35 crc kubenswrapper[4835]: I0201 08:01:35.566965 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:35 crc kubenswrapper[4835]: E0201 08:01:35.567315 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.567808 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.568229 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.568259 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.568339 4835 scope.go:117] "RemoveContainer" 
containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:36 crc kubenswrapper[4835]: I0201 08:01:36.568382 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:36 crc kubenswrapper[4835]: E0201 08:01:36.569028 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:39 crc kubenswrapper[4835]: I0201 08:01:39.567354 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:39 crc kubenswrapper[4835]: I0201 08:01:39.567816 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:39 crc kubenswrapper[4835]: E0201 08:01:39.568180 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:41 crc kubenswrapper[4835]: I0201 08:01:41.567250 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:41 crc kubenswrapper[4835]: I0201 08:01:41.567636 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:01:41 crc kubenswrapper[4835]: I0201 08:01:41.567752 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:01:41 crc kubenswrapper[4835]: E0201 08:01:41.568076 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with 
CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:44 crc kubenswrapper[4835]: I0201 08:01:44.567687 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:01:44 crc kubenswrapper[4835]: I0201 08:01:44.567861 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:01:44 crc kubenswrapper[4835]: I0201 08:01:44.568101 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:01:44 crc kubenswrapper[4835]: E0201 08:01:44.568775 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:46 crc kubenswrapper[4835]: I0201 08:01:46.566662 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:46 crc kubenswrapper[4835]: E0201 08:01:46.567531 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576151 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576337 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576405 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576625 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:01:47 crc kubenswrapper[4835]: I0201 08:01:47.576719 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:01:47 crc kubenswrapper[4835]: E0201 08:01:47.577307 4835 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:01:50 crc kubenswrapper[4835]: I0201 08:01:50.567279 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:01:50 crc kubenswrapper[4835]: I0201 08:01:50.567668 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:01:50 crc kubenswrapper[4835]: E0201 08:01:50.568000 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:01:51 crc kubenswrapper[4835]: I0201 08:01:51.566833 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:01:51 crc kubenswrapper[4835]: I0201 08:01:51.567292 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:51 crc kubenswrapper[4835]: E0201 08:01:51.854826 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:52 crc kubenswrapper[4835]: I0201 08:01:52.587902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08"} Feb 01 08:01:52 crc kubenswrapper[4835]: I0201 08:01:52.588506 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:01:52 
crc kubenswrapper[4835]: I0201 08:01:52.589068 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:52 crc kubenswrapper[4835]: E0201 08:01:52.589494 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:53 crc kubenswrapper[4835]: I0201 08:01:53.596273 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:01:53 crc kubenswrapper[4835]: E0201 08:01:53.596606 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:01:54 crc kubenswrapper[4835]: I0201 08:01:54.567567 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:01:54 crc kubenswrapper[4835]: I0201 08:01:54.567792 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:01:54 crc kubenswrapper[4835]: I0201 08:01:54.568046 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:01:54 crc kubenswrapper[4835]: E0201 08:01:54.568688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:01:57 crc kubenswrapper[4835]: I0201 08:01:57.539900 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:57 crc kubenswrapper[4835]: I0201 08:01:57.540781 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:01:58 crc kubenswrapper[4835]: I0201 08:01:58.568182 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:01:58 crc kubenswrapper[4835]: I0201 08:01:58.568324 4835 scope.go:117] "RemoveContainer" 
containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:01:58 crc kubenswrapper[4835]: I0201 08:01:58.568534 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:01:58 crc kubenswrapper[4835]: E0201 08:01:58.569060 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:01:59 crc kubenswrapper[4835]: I0201 08:01:59.567877 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:01:59 crc kubenswrapper[4835]: E0201 08:01:59.568361 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.538639 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566519 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566593 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566621 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566716 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:02:00 crc kubenswrapper[4835]: I0201 08:02:00.566749 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:02:00 crc kubenswrapper[4835]: E0201 08:02:00.567027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:02:02 crc kubenswrapper[4835]: I0201 08:02:02.538346 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:02 crc kubenswrapper[4835]: I0201 08:02:02.567106 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:02:02 crc kubenswrapper[4835]: I0201 08:02:02.567149 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:02:02 crc kubenswrapper[4835]: E0201 08:02:02.567571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.537461 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.537582 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.538944 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.538998 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.539045 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" 
containerID="cri-o://c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08" gracePeriod=30 Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.541074 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.694749 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08" exitCode=0 Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.694800 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08"} Feb 01 08:02:03 crc kubenswrapper[4835]: I0201 08:02:03.694838 4835 scope.go:117] "RemoveContainer" containerID="883ccd57a3905f332990cb8954e5ba8bcd7a455c0cc4e70c73ddbcfe4e1a757c" Feb 01 08:02:03 crc kubenswrapper[4835]: E0201 08:02:03.888391 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:04 crc kubenswrapper[4835]: I0201 08:02:04.705269 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"} Feb 01 08:02:04 crc kubenswrapper[4835]: I0201 08:02:04.706320 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:02:04 crc kubenswrapper[4835]: I0201 08:02:04.706182 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:04 crc kubenswrapper[4835]: E0201 08:02:04.707237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:05 crc kubenswrapper[4835]: I0201 08:02:05.567940 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:02:05 crc kubenswrapper[4835]: I0201 08:02:05.568109 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:02:05 crc kubenswrapper[4835]: I0201 08:02:05.568360 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:02:05 crc kubenswrapper[4835]: E0201 08:02:05.569004 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:02:05 crc kubenswrapper[4835]: I0201 08:02:05.717049 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:05 crc kubenswrapper[4835]: E0201 08:02:05.717364 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:09 crc kubenswrapper[4835]: I0201 08:02:09.538571 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:11 crc kubenswrapper[4835]: I0201 08:02:11.566780 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:02:11 crc kubenswrapper[4835]: I0201 08:02:11.567083 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:02:11 crc kubenswrapper[4835]: I0201 08:02:11.567167 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:02:11 crc kubenswrapper[4835]: E0201 08:02:11.567431 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:02:12 crc kubenswrapper[4835]: I0201 08:02:12.537688 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:12 crc kubenswrapper[4835]: I0201 08:02:12.537831 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" 
probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:12 crc kubenswrapper[4835]: I0201 08:02:12.566925 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:02:12 crc kubenswrapper[4835]: E0201 08:02:12.567322 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568323 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568523 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568579 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568729 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:02:13 crc kubenswrapper[4835]: I0201 08:02:13.568809 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:02:13 crc kubenswrapper[4835]: E0201 08:02:13.569454 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.539383 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.539988 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.540968 4835 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.540997 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.541031 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" gracePeriod=30 Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.542630 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:02:15 crc kubenswrapper[4835]: E0201 08:02:15.665343 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.814513 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" exitCode=0 Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.814571 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9"} Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.814618 4835 scope.go:117] "RemoveContainer" containerID="c213ea49d7dafd73fec0de5cdaa6e768dd362d5894fea2a2068751be2aed6e08" Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.815295 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:02:15 crc kubenswrapper[4835]: I0201 08:02:15.815347 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:15 crc kubenswrapper[4835]: E0201 08:02:15.815758 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:16 crc kubenswrapper[4835]: I0201 08:02:16.567035 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:02:16 crc kubenswrapper[4835]: I0201 08:02:16.567063 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:02:16 crc kubenswrapper[4835]: E0201 08:02:16.567353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:02:18 crc kubenswrapper[4835]: I0201 08:02:18.567969 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:02:18 crc kubenswrapper[4835]: I0201 08:02:18.568534 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:02:18 crc kubenswrapper[4835]: I0201 08:02:18.568780 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:02:18 crc kubenswrapper[4835]: E0201 08:02:18.569531 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.907926 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" exitCode=1 Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.907984 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a"} Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.908599 4835 scope.go:117] "RemoveContainer" containerID="9299bf2d1843f2bf2326c5cd40b5b3e3ca4b314793c9ab4ac3d7140160844fa0" Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909516 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909615 4835 scope.go:117] 
"RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909645 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909716 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909744 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:02:24 crc kubenswrapper[4835]: I0201 08:02:24.909800 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:02:24 crc kubenswrapper[4835]: E0201 08:02:24.910179 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:02:26 crc kubenswrapper[4835]: I0201 08:02:26.567486 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:02:26 crc kubenswrapper[4835]: I0201 08:02:26.567911 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:02:26 crc kubenswrapper[4835]: I0201 08:02:26.567957 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:02:26 crc kubenswrapper[4835]: I0201 08:02:26.568092 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:02:26 crc kubenswrapper[4835]: E0201 08:02:26.568509 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 
08:02:26 crc kubenswrapper[4835]: E0201 08:02:26.568681 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:02:27 crc kubenswrapper[4835]: I0201 08:02:27.572188 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:02:27 crc kubenswrapper[4835]: I0201 08:02:27.572214 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:27 crc kubenswrapper[4835]: I0201 08:02:27.572402 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:02:27 crc kubenswrapper[4835]: I0201 08:02:27.572456 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:02:27 crc kubenswrapper[4835]: E0201 08:02:27.572452 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:27 crc kubenswrapper[4835]: E0201 08:02:27.572719 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:02:31 crc kubenswrapper[4835]: I0201 08:02:31.567213 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:02:31 crc kubenswrapper[4835]: I0201 08:02:31.567952 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:02:31 crc kubenswrapper[4835]: I0201 08:02:31.568069 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:02:31 crc kubenswrapper[4835]: E0201 08:02:31.568436 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:02:37 crc kubenswrapper[4835]: I0201 08:02:37.572753 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:02:37 crc kubenswrapper[4835]: E0201 08:02:37.573625 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.567494 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.567530 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:38 crc kubenswrapper[4835]: E0201 08:02:38.567765 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568098 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568336 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568441 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568575 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568593 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:02:38 crc kubenswrapper[4835]: I0201 08:02:38.568680 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 
01 08:02:38 crc kubenswrapper[4835]: E0201 08:02:38.569626 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:02:40 crc kubenswrapper[4835]: I0201 08:02:40.567096 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:02:40 crc kubenswrapper[4835]: I0201 08:02:40.567533 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:02:40 crc kubenswrapper[4835]: I0201 08:02:40.567670 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:02:40 crc kubenswrapper[4835]: E0201 08:02:40.568096 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:02:41 crc kubenswrapper[4835]: I0201 08:02:41.567672 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:02:41 crc kubenswrapper[4835]: I0201 08:02:41.567719 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:02:41 crc kubenswrapper[4835]: E0201 08:02:41.568245 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:02:43 crc kubenswrapper[4835]: I0201 08:02:43.567661 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:02:43 crc kubenswrapper[4835]: I0201 08:02:43.568060 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:02:43 crc kubenswrapper[4835]: I0201 08:02:43.568196 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:02:43 crc kubenswrapper[4835]: E0201 08:02:43.568562 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:02:49 crc kubenswrapper[4835]: I0201 08:02:49.567097 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:02:49 crc kubenswrapper[4835]: I0201 08:02:49.567590 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:02:49 crc kubenswrapper[4835]: E0201 08:02:49.567973 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:02:51 crc kubenswrapper[4835]: I0201 08:02:51.567573 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:02:51 crc kubenswrapper[4835]: E0201 08:02:51.568627 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:02:53 crc 
kubenswrapper[4835]: I0201 08:02:53.567222 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567614 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567635 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567679 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567686 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:02:53 crc kubenswrapper[4835]: I0201 08:02:53.567717 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:02:53 crc kubenswrapper[4835]: E0201 08:02:53.568021 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.566677 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.566763 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.566880 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:02:54 crc kubenswrapper[4835]: E0201 08:02:54.567186 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 
2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.569496 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:02:54 crc kubenswrapper[4835]: I0201 08:02:54.569744 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:02:54 crc kubenswrapper[4835]: E0201 08:02:54.570448 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:02:55 crc kubenswrapper[4835]: I0201 08:02:55.567512 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:02:55 crc kubenswrapper[4835]: I0201 08:02:55.567639 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:02:55 crc kubenswrapper[4835]: I0201 08:02:55.567820 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:02:55 crc kubenswrapper[4835]: E0201 08:02:55.568338 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:02:57 crc kubenswrapper[4835]: I0201 08:02:57.328496 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:02:57 crc kubenswrapper[4835]: E0201 08:02:57.328758 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:02:57 crc kubenswrapper[4835]: E0201 08:02:57.329131 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:04:59.329102704 +0000 UTC m=+2572.449539168 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 08:03:01 crc kubenswrapper[4835]: I0201 08:03:01.566744 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:03:01 crc kubenswrapper[4835]: I0201 08:03:01.567137 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:03:01 crc kubenswrapper[4835]: E0201 08:03:01.567776 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:03:03 crc kubenswrapper[4835]: I0201 08:03:03.566786 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:03:03 crc kubenswrapper[4835]: E0201 08:03:03.567185 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:03:06 crc kubenswrapper[4835]: I0201 08:03:06.567080 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:03:06 crc kubenswrapper[4835]: I0201 08:03:06.567134 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:03:06 crc kubenswrapper[4835]: E0201 08:03:06.567398 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573468 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 
08:03:07.573543 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573568 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573620 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573628 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:03:07 crc kubenswrapper[4835]: I0201 08:03:07.573664 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:03:07 crc kubenswrapper[4835]: E0201 08:03:07.574020 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:03:07 crc kubenswrapper[4835]: E0201 08:03:07.905851 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:03:08 crc kubenswrapper[4835]: I0201 08:03:08.296991 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:03:08 crc kubenswrapper[4835]: I0201 08:03:08.567624 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:08 crc kubenswrapper[4835]: I0201 08:03:08.567750 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:08 crc kubenswrapper[4835]: I0201 08:03:08.567922 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:08 crc kubenswrapper[4835]: E0201 08:03:08.568588 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:10 crc kubenswrapper[4835]: I0201 08:03:10.568588 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:03:10 crc kubenswrapper[4835]: I0201 08:03:10.569110 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:03:10 crc kubenswrapper[4835]: I0201 08:03:10.569288 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:03:10 crc kubenswrapper[4835]: E0201 08:03:10.569858 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.363368 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c" exitCode=1 Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.363453 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c"} Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.364128 4835 scope.go:117] 
"RemoveContainer" containerID="c79ff7541114600de37a172509eea1cb11eec93c315c86aafccf0b9d756e98ea" Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.365272 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.365495 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.365742 4835 scope.go:117] "RemoveContainer" containerID="7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c" Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.365807 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:14 crc kubenswrapper[4835]: E0201 08:03:14.366571 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:14 crc kubenswrapper[4835]: I0201 08:03:14.566598 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:03:14 crc kubenswrapper[4835]: E0201 08:03:14.566857 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:03:15 crc kubenswrapper[4835]: I0201 08:03:15.567027 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:03:15 crc kubenswrapper[4835]: I0201 08:03:15.567074 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:03:15 crc kubenswrapper[4835]: E0201 08:03:15.567547 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" 
podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:03:19 crc kubenswrapper[4835]: I0201 08:03:19.567652 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:03:19 crc kubenswrapper[4835]: I0201 08:03:19.568054 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:03:19 crc kubenswrapper[4835]: E0201 08:03:19.568533 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567025 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567114 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567144 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567241 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567252 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567296 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567307 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567433 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:03:22 crc kubenswrapper[4835]: I0201 08:03:22.567558 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:03:22 crc kubenswrapper[4835]: E0201 08:03:22.567822 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" 
podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:03:22 crc kubenswrapper[4835]: E0201 08:03:22.723313 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.474885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"} Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476205 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476361 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476604 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476640 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:03:23 crc kubenswrapper[4835]: I0201 08:03:23.476729 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:03:23 crc kubenswrapper[4835]: E0201 08:03:23.477534 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", 
failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:03:28 crc kubenswrapper[4835]: I0201 08:03:28.567118 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:28 crc kubenswrapper[4835]: I0201 08:03:28.567646 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:28 crc kubenswrapper[4835]: I0201 08:03:28.567769 4835 scope.go:117] "RemoveContainer" containerID="7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c" Feb 01 08:03:28 crc kubenswrapper[4835]: I0201 08:03:28.567781 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:28 crc kubenswrapper[4835]: E0201 08:03:28.765013 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.538462 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565"} Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.539085 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.539146 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.539232 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:29 crc kubenswrapper[4835]: E0201 08:03:29.539500 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" 
pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.567599 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.567641 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:03:29 crc kubenswrapper[4835]: I0201 08:03:29.567820 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:03:29 crc kubenswrapper[4835]: E0201 08:03:29.567934 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:03:29 crc kubenswrapper[4835]: E0201 08:03:29.568175 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:03:32 crc kubenswrapper[4835]: I0201 08:03:32.567399 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:03:32 crc kubenswrapper[4835]: I0201 08:03:32.567818 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:03:32 crc kubenswrapper[4835]: E0201 08:03:32.568389 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.096140 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:35 crc kubenswrapper[4835]: E0201 08:03:35.097015 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c36c8d-897d-4b88-a236-44fe0d511c4e" containerName="keystone-cron" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.097035 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c36c8d-897d-4b88-a236-44fe0d511c4e" containerName="keystone-cron" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.097334 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c36c8d-897d-4b88-a236-44fe0d511c4e" 
containerName="keystone-cron" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.099179 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.114456 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.177598 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.177706 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.177785 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279265 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279387 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279472 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279994 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.279994 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 
08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.302059 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") pod \"redhat-operators-7j2wj\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.419723 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.568525 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.569045 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.569248 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:03:35 crc kubenswrapper[4835]: E0201 08:03:35.569717 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.569988 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.570103 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.570181 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.570190 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.570223 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:03:35 crc kubenswrapper[4835]: E0201 08:03:35.570795 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s 
restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:03:35 crc kubenswrapper[4835]: I0201 08:03:35.846388 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:36 crc kubenswrapper[4835]: I0201 08:03:36.610449 4835 generic.go:334] "Generic (PLEG): container finished" podID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerID="3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7" exitCode=0 Feb 01 08:03:36 crc kubenswrapper[4835]: I0201 08:03:36.610554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerDied","Data":"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7"} Feb 01 08:03:36 crc kubenswrapper[4835]: I0201 08:03:36.610889 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerStarted","Data":"58ed440a20da43dc583d7240e9212e0673c19c5402a0c25302ee77b406df25bc"} Feb 01 08:03:36 crc kubenswrapper[4835]: I0201 08:03:36.612354 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 01 08:03:37 crc kubenswrapper[4835]: I0201 08:03:37.623029 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerStarted","Data":"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd"} Feb 01 08:03:38 crc kubenswrapper[4835]: I0201 08:03:38.637943 4835 generic.go:334] "Generic (PLEG): container finished" podID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerID="aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd" exitCode=0 Feb 01 08:03:38 crc kubenswrapper[4835]: I0201 08:03:38.638042 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerDied","Data":"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd"} Feb 01 08:03:39 crc kubenswrapper[4835]: I0201 08:03:39.649556 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerStarted","Data":"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457"} Feb 01 08:03:39 crc kubenswrapper[4835]: I0201 08:03:39.675362 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7j2wj" podStartSLOduration=2.209158762 podStartE2EDuration="4.675336864s" podCreationTimestamp="2026-02-01 08:03:35 +0000 UTC" firstStartedPulling="2026-02-01 08:03:36.612151049 +0000 UTC m=+2489.732587483" lastFinishedPulling="2026-02-01 08:03:39.078329111 +0000 UTC m=+2492.198765585" 
observedRunningTime="2026-02-01 08:03:39.665391805 +0000 UTC m=+2492.785828229" watchObservedRunningTime="2026-02-01 08:03:39.675336864 +0000 UTC m=+2492.795773328" Feb 01 08:03:40 crc kubenswrapper[4835]: I0201 08:03:40.566465 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:03:40 crc kubenswrapper[4835]: E0201 08:03:40.566764 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:03:40 crc kubenswrapper[4835]: I0201 08:03:40.567670 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:40 crc kubenswrapper[4835]: I0201 08:03:40.567752 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:40 crc kubenswrapper[4835]: I0201 08:03:40.567874 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:40 crc kubenswrapper[4835]: E0201 08:03:40.568240 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:44 crc kubenswrapper[4835]: I0201 08:03:44.566858 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:03:44 crc kubenswrapper[4835]: I0201 08:03:44.567188 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:03:44 crc kubenswrapper[4835]: I0201 08:03:44.567283 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:03:44 crc kubenswrapper[4835]: I0201 08:03:44.567307 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:03:44 crc kubenswrapper[4835]: E0201 08:03:44.567405 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" 
pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:03:44 crc kubenswrapper[4835]: E0201 08:03:44.567535 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:03:45 crc kubenswrapper[4835]: I0201 08:03:45.420214 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:45 crc kubenswrapper[4835]: I0201 08:03:45.420267 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:46 crc kubenswrapper[4835]: I0201 08:03:46.466469 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7j2wj" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server" probeResult="failure" output=< Feb 01 08:03:46 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Feb 01 08:03:46 crc kubenswrapper[4835]: > Feb 01 08:03:47 crc kubenswrapper[4835]: I0201 08:03:47.591107 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:03:47 crc kubenswrapper[4835]: I0201 08:03:47.591196 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:03:47 crc kubenswrapper[4835]: I0201 08:03:47.591308 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:03:47 crc kubenswrapper[4835]: E0201 08:03:47.592123 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 08:03:49.568520 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 08:03:49.568807 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 08:03:49.568880 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 
08:03:49.568888 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:03:49 crc kubenswrapper[4835]: I0201 08:03:49.568918 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:03:49 crc kubenswrapper[4835]: E0201 08:03:49.569183 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:03:52 crc kubenswrapper[4835]: I0201 08:03:52.567181 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:03:52 crc kubenswrapper[4835]: E0201 08:03:52.567714 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:03:52 crc kubenswrapper[4835]: I0201 08:03:52.567878 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:03:52 crc kubenswrapper[4835]: I0201 08:03:52.567939 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:03:52 crc kubenswrapper[4835]: I0201 08:03:52.568035 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:03:52 crc kubenswrapper[4835]: E0201 08:03:52.568320 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:03:55 crc kubenswrapper[4835]: I0201 08:03:55.484727 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:55 crc kubenswrapper[4835]: I0201 08:03:55.558282 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:55 crc kubenswrapper[4835]: I0201 08:03:55.737029 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:56 crc kubenswrapper[4835]: I0201 08:03:56.805612 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7j2wj" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server" containerID="cri-o://77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" gracePeriod=2 Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.165756 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.273160 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") pod \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.273317 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") pod \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.273353 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") pod \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\" (UID: \"bebc21e2-e3f2-411b-ade8-2c3137cc286e\") " Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.274246 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities" (OuterVolumeSpecName: "utilities") pod "bebc21e2-e3f2-411b-ade8-2c3137cc286e" (UID: "bebc21e2-e3f2-411b-ade8-2c3137cc286e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.286578 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp" (OuterVolumeSpecName: "kube-api-access-njknp") pod "bebc21e2-e3f2-411b-ade8-2c3137cc286e" (UID: "bebc21e2-e3f2-411b-ade8-2c3137cc286e"). InnerVolumeSpecName "kube-api-access-njknp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.375102 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.375132 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njknp\" (UniqueName: \"kubernetes.io/projected/bebc21e2-e3f2-411b-ade8-2c3137cc286e-kube-api-access-njknp\") on node \"crc\" DevicePath \"\"" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.408741 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bebc21e2-e3f2-411b-ade8-2c3137cc286e" (UID: "bebc21e2-e3f2-411b-ade8-2c3137cc286e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.476126 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebc21e2-e3f2-411b-ade8-2c3137cc286e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.576792 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.576837 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:03:57 crc kubenswrapper[4835]: E0201 08:03:57.577311 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.817832 4835 generic.go:334] "Generic (PLEG): container finished" podID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerID="77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" exitCode=0 Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.817880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerDied","Data":"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457"} Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.817931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7j2wj" event={"ID":"bebc21e2-e3f2-411b-ade8-2c3137cc286e","Type":"ContainerDied","Data":"58ed440a20da43dc583d7240e9212e0673c19c5402a0c25302ee77b406df25bc"} Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.817951 4835 scope.go:117] "RemoveContainer" containerID="77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.818152 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7j2wj" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.848977 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.853278 4835 scope.go:117] "RemoveContainer" containerID="aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.855571 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7j2wj"] Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.876270 4835 scope.go:117] "RemoveContainer" containerID="3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.923346 4835 scope.go:117] "RemoveContainer" containerID="77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" Feb 01 08:03:57 crc kubenswrapper[4835]: E0201 08:03:57.923965 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457\": container with ID starting with 77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457 not found: ID does not exist" containerID="77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.924019 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457"} err="failed to get container status \"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457\": rpc error: code = NotFound desc = could not find container \"77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457\": container with ID starting with 77fccf4bbf84a324d8a1f4d7b9b41d997773d67042329e89dcce7acc2b1c6457 not found: ID does not exist" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.924050 4835 scope.go:117] "RemoveContainer" containerID="aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd" Feb 01 08:03:57 crc kubenswrapper[4835]: E0201 08:03:57.927912 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd\": container with ID starting with aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd not found: ID does not exist" containerID="aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.928039 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd"} err="failed to get container status \"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd\": rpc error: code = NotFound desc = could not find container \"aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd\": container with ID starting with aa4f540a8af4aa43b6bca0f9b11ec832a4b6d8e0accb0adf38f0f3ba2cd668cd not found: ID does not exist" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.929071 4835 scope.go:117] "RemoveContainer" containerID="3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7" Feb 01 08:03:57 crc kubenswrapper[4835]: E0201 08:03:57.929674 4835 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7\": container with ID starting with 3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7 not found: ID does not exist" containerID="3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7" Feb 01 08:03:57 crc kubenswrapper[4835]: I0201 08:03:57.929718 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7"} err="failed to get container status \"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7\": rpc error: code = NotFound desc = could not find container \"3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7\": container with ID starting with 3d75871b30e9c2f2ae0f507a0249884613d5901f002c9e4fc0e2f9e5e187a3d7 not found: ID does not exist" Feb 01 08:03:58 crc kubenswrapper[4835]: I0201 08:03:58.566822 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:03:58 crc kubenswrapper[4835]: I0201 08:03:58.567187 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:03:58 crc kubenswrapper[4835]: E0201 08:03:58.567606 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:03:59 crc kubenswrapper[4835]: I0201 08:03:59.567434 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:03:59 crc kubenswrapper[4835]: I0201 08:03:59.567519 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:03:59 crc kubenswrapper[4835]: I0201 08:03:59.567635 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:03:59 crc kubenswrapper[4835]: E0201 08:03:59.567956 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:03:59 crc kubenswrapper[4835]: I0201 08:03:59.575400 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" 
path="/var/lib/kubelet/pods/bebc21e2-e3f2-411b-ade8-2c3137cc286e/volumes" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568067 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568603 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568763 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568813 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:02 crc kubenswrapper[4835]: I0201 08:04:02.568895 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:02 crc kubenswrapper[4835]: E0201 08:04:02.569482 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:03 crc kubenswrapper[4835]: I0201 08:04:03.567693 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:04:03 crc kubenswrapper[4835]: I0201 08:04:03.880303 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549"} Feb 01 08:04:06 crc kubenswrapper[4835]: I0201 08:04:06.567244 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:04:06 crc kubenswrapper[4835]: I0201 08:04:06.567921 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:04:06 crc kubenswrapper[4835]: I0201 08:04:06.568101 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:04:06 crc kubenswrapper[4835]: E0201 08:04:06.568612 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 2m40s 
restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:10 crc kubenswrapper[4835]: I0201 08:04:10.566736 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:10 crc kubenswrapper[4835]: I0201 08:04:10.568168 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:10 crc kubenswrapper[4835]: E0201 08:04:10.568561 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:11 crc kubenswrapper[4835]: I0201 08:04:11.567275 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:11 crc kubenswrapper[4835]: I0201 08:04:11.567783 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:04:11 crc kubenswrapper[4835]: E0201 08:04:11.568232 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.996808 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" exitCode=1 Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.996920 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996"} Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997129 4835 scope.go:117] "RemoveContainer" containerID="700112fad0f4ad91d48c44e77419088f8f3cdd322d0db821e4eac71b3672a4b2" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997815 4835 scope.go:117] "RemoveContainer" 
containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997865 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997935 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:12 crc kubenswrapper[4835]: I0201 08:04:12.997954 4835 scope.go:117] "RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:04:13 crc kubenswrapper[4835]: E0201 08:04:13.519267 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566384 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566528 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566638 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566646 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:13 crc kubenswrapper[4835]: I0201 08:04:13.566680 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:13 crc kubenswrapper[4835]: E0201 08:04:13.567036 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.014651 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" exitCode=1 Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 
08:04:14.015612 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" exitCode=1 Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.014721 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"} Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.015767 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"} Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.015852 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"} Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.015923 4835 scope.go:117] "RemoveContainer" containerID="bcaf357cf941acd2a995a6899d75295b0c7a7ce6483d06a6c43023494428b112" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.016552 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.016643 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.016754 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:14 crc kubenswrapper[4835]: E0201 08:04:14.017116 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:14 crc kubenswrapper[4835]: I0201 08:04:14.072529 4835 scope.go:117] "RemoveContainer" containerID="deb7e8f62671085cd48bbd43a88cbb5fae4009897252af2e6b35fd30f6a09396" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.037954 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" exitCode=1 Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.038014 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"} Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.038066 4835 scope.go:117] 
"RemoveContainer" containerID="ba70a69d7656cefb6c802da17a19fb841daabe2c204dfe526d49332649224d38" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.038976 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.039093 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.039241 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:15 crc kubenswrapper[4835]: I0201 08:04:15.039255 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:15 crc kubenswrapper[4835]: E0201 08:04:15.039821 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:16 crc kubenswrapper[4835]: I0201 08:04:16.065212 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:16 crc kubenswrapper[4835]: I0201 08:04:16.065277 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:16 crc kubenswrapper[4835]: I0201 08:04:16.065349 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:16 crc kubenswrapper[4835]: I0201 08:04:16.065355 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:16 crc kubenswrapper[4835]: E0201 08:04:16.065774 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:20 crc kubenswrapper[4835]: I0201 08:04:20.566522 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:04:20 crc kubenswrapper[4835]: I0201 08:04:20.566868 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:04:20 crc kubenswrapper[4835]: I0201 08:04:20.566971 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:04:21 crc kubenswrapper[4835]: I0201 08:04:21.116290 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"} Feb 01 08:04:21 crc kubenswrapper[4835]: I0201 08:04:21.116842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"} Feb 01 08:04:21 crc kubenswrapper[4835]: I0201 08:04:21.116915 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"} Feb 01 08:04:21 crc kubenswrapper[4835]: I0201 08:04:21.155482 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=351.155461568 podStartE2EDuration="5m51.155461568s" podCreationTimestamp="2026-02-01 07:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-01 08:04:21.151207657 +0000 UTC m=+2534.271644111" watchObservedRunningTime="2026-02-01 08:04:21.155461568 +0000 UTC m=+2534.275898022" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136686 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" exitCode=1 Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136725 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" exitCode=1 Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136735 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" exitCode=1 Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136744 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"} Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136797 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"} Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136816 4835 scope.go:117] "RemoveContainer" containerID="5b1bb4344aa56728b56be4e9cfb5a2d1d40bacfb45873185501bd35a0046617d" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.136951 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"} Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.137573 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.137663 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.137795 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:22 crc kubenswrapper[4835]: E0201 08:04:22.138272 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.187470 4835 scope.go:117] "RemoveContainer" containerID="beb23198e5a331b05389a3dda9f21652a0e5962a637ddce0690fbd90fd62f664" Feb 01 08:04:22 crc kubenswrapper[4835]: I0201 08:04:22.229002 4835 scope.go:117] "RemoveContainer" containerID="1a168f1a2ffdefdcd457f20386065ee064ed231d9cd10e713eb2f53ccb745315" Feb 01 08:04:23 crc kubenswrapper[4835]: I0201 08:04:23.151756 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:23 crc kubenswrapper[4835]: I0201 08:04:23.151844 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:23 crc kubenswrapper[4835]: I0201 08:04:23.151986 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:23 crc kubenswrapper[4835]: E0201 08:04:23.152295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for 
\"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:24 crc kubenswrapper[4835]: I0201 08:04:24.566844 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:24 crc kubenswrapper[4835]: I0201 08:04:24.566886 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:24 crc kubenswrapper[4835]: E0201 08:04:24.567219 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:26 crc kubenswrapper[4835]: I0201 08:04:26.567049 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:26 crc kubenswrapper[4835]: I0201 08:04:26.567111 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:04:26 crc kubenswrapper[4835]: E0201 08:04:26.567653 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574590 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574680 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574778 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574789 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:27 crc kubenswrapper[4835]: I0201 08:04:27.574833 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:27 crc kubenswrapper[4835]: E0201 08:04:27.576519 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:28 crc kubenswrapper[4835]: I0201 08:04:28.567848 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:28 crc kubenswrapper[4835]: I0201 08:04:28.568252 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:28 crc kubenswrapper[4835]: I0201 08:04:28.568452 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996" Feb 01 08:04:28 crc kubenswrapper[4835]: I0201 08:04:28.568467 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:28 crc kubenswrapper[4835]: E0201 08:04:28.784116 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:29 crc kubenswrapper[4835]: I0201 08:04:29.225098 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"} Feb 01 08:04:29 crc kubenswrapper[4835]: I0201 08:04:29.226110 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:29 crc kubenswrapper[4835]: I0201 08:04:29.226254 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:29 crc kubenswrapper[4835]: I0201 08:04:29.226455 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:29 crc kubenswrapper[4835]: E0201 08:04:29.226840 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:35 crc kubenswrapper[4835]: I0201 08:04:35.572619 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:35 crc kubenswrapper[4835]: I0201 08:04:35.573141 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:35 crc kubenswrapper[4835]: I0201 08:04:35.573261 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:35 crc kubenswrapper[4835]: E0201 08:04:35.573660 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:36 crc kubenswrapper[4835]: I0201 08:04:36.566854 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:36 crc kubenswrapper[4835]: I0201 08:04:36.567278 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:36 crc kubenswrapper[4835]: E0201 08:04:36.567718 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.568322 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.568846 4835 scope.go:117] "RemoveContainer" 
containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.568999 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.569015 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:39 crc kubenswrapper[4835]: I0201 08:04:39.569082 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:39 crc kubenswrapper[4835]: E0201 08:04:39.569697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:41 crc kubenswrapper[4835]: I0201 08:04:41.567200 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:41 crc kubenswrapper[4835]: I0201 08:04:41.567709 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:04:41 crc kubenswrapper[4835]: E0201 08:04:41.764371 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:42 crc kubenswrapper[4835]: I0201 08:04:42.343838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"} Feb 01 08:04:42 crc kubenswrapper[4835]: I0201 08:04:42.344263 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:04:42 crc kubenswrapper[4835]: I0201 08:04:42.344786 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:42 crc kubenswrapper[4835]: E0201 08:04:42.345335 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356069 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" exitCode=1 Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356115 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"} Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356146 4835 scope.go:117] "RemoveContainer" containerID="1135d8a16b34923874e5ef3fca1f9a5bd47b1d3fc741db187c9507a3753fb390" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356700 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.356723 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:04:43 crc kubenswrapper[4835]: E0201 08:04:43.357113 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.567325 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.567525 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:43 crc kubenswrapper[4835]: I0201 08:04:43.567707 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:43 crc kubenswrapper[4835]: E0201 08:04:43.568177 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:44 crc kubenswrapper[4835]: I0201 
08:04:44.385897 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:44 crc kubenswrapper[4835]: I0201 08:04:44.386321 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:04:44 crc kubenswrapper[4835]: E0201 08:04:44.386670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:45 crc kubenswrapper[4835]: I0201 08:04:45.535953 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:04:45 crc kubenswrapper[4835]: I0201 08:04:45.536853 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:45 crc kubenswrapper[4835]: I0201 08:04:45.536876 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:04:45 crc kubenswrapper[4835]: E0201 08:04:45.537376 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:47 crc kubenswrapper[4835]: I0201 08:04:47.578155 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:47 crc kubenswrapper[4835]: I0201 08:04:47.578972 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:47 crc kubenswrapper[4835]: I0201 08:04:47.579234 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:47 crc kubenswrapper[4835]: E0201 08:04:47.579915 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" 
pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:04:49 crc kubenswrapper[4835]: I0201 08:04:49.567617 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:49 crc kubenswrapper[4835]: I0201 08:04:49.569576 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:49 crc kubenswrapper[4835]: E0201 08:04:49.788145 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.446528 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"} Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.446850 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.447180 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:50 crc kubenswrapper[4835]: E0201 08:04:50.447502 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568599 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568749 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568901 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568916 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:04:50 crc kubenswrapper[4835]: I0201 08:04:50.568979 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:04:50 crc kubenswrapper[4835]: E0201 08:04:50.569633 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with 
CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.462894 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" exitCode=1 Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.462963 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"} Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.463510 4835 scope.go:117] "RemoveContainer" containerID="d066d4212d9307c26c7d9e5b2b4d590cb9286884fad9c084fd09d2f20964190d" Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.463738 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:51 crc kubenswrapper[4835]: I0201 08:04:51.463770 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:04:51 crc kubenswrapper[4835]: E0201 08:04:51.464147 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:52 crc kubenswrapper[4835]: I0201 08:04:52.019750 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:04:52 crc kubenswrapper[4835]: I0201 08:04:52.477707 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:52 crc kubenswrapper[4835]: I0201 08:04:52.478072 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:04:52 crc kubenswrapper[4835]: E0201 08:04:52.478364 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:53 crc kubenswrapper[4835]: I0201 08:04:53.487624 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:04:53 crc kubenswrapper[4835]: I0201 08:04:53.487668 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:04:53 crc kubenswrapper[4835]: E0201 08:04:53.488059 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:04:54 crc kubenswrapper[4835]: I0201 08:04:54.567956 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:04:54 crc kubenswrapper[4835]: I0201 08:04:54.568484 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:04:54 crc kubenswrapper[4835]: I0201 08:04:54.568665 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:04:54 crc kubenswrapper[4835]: E0201 08:04:54.569134 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:04:57 crc kubenswrapper[4835]: I0201 08:04:57.579092 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:04:57 crc kubenswrapper[4835]: I0201 08:04:57.579152 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:04:57 crc kubenswrapper[4835]: E0201 08:04:57.579646 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" 
pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:04:59 crc kubenswrapper[4835]: I0201 08:04:59.429173 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:04:59 crc kubenswrapper[4835]: E0201 08:04:59.429324 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:04:59 crc kubenswrapper[4835]: E0201 08:04:59.431686 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:07:01.431644452 +0000 UTC m=+2694.552080926 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 08:04:59 crc kubenswrapper[4835]: I0201 08:04:59.567918 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:04:59 crc kubenswrapper[4835]: I0201 08:04:59.568018 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:04:59 crc kubenswrapper[4835]: I0201 08:04:59.568177 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:04:59 crc kubenswrapper[4835]: E0201 08:04:59.568670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567526 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567657 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567842 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567859 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.567930 4835 
scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.588506 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" exitCode=1 Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.588575 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d"} Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.588625 4835 scope.go:117] "RemoveContainer" containerID="6aaadf97ef22242cf5b15148b8cd42d71eb7c275654a87f6591085d77d846827" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.590511 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.590962 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.591302 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:01 crc kubenswrapper[4835]: I0201 08:05:01.591638 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:01 crc kubenswrapper[4835]: E0201 08:05:01.593110 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:02 crc kubenswrapper[4835]: E0201 08:05:02.308963 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606159 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" exitCode=1 Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606212 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" 
containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" exitCode=1 Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606301 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"} Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606338 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"} Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606357 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"} Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606379 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"} Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.606405 4835 scope.go:117] "RemoveContainer" containerID="3807f64d47a377a2bd605873c4923efbd186a758ddbbc494cee41f02ace0dd90" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.607392 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.607548 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.607723 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:05:02 crc kubenswrapper[4835]: E0201 08:05:02.608219 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:02 crc kubenswrapper[4835]: I0201 08:05:02.677941 4835 scope.go:117] "RemoveContainer" containerID="aaf2720d3a819bc588966df31c8062823efa25fd3fc876174d4fceea32da098b" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.636870 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" exitCode=1 Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.636958 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565"} Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.637031 4835 scope.go:117] "RemoveContainer" containerID="7281a9d7c1d9d8dc16a17f203151e4b7970267f00d4334688eaa717a6dc5211c" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.637840 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.637916 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.638003 4835 scope.go:117] "RemoveContainer" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.638029 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:03 crc kubenswrapper[4835]: E0201 08:05:03.638639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.650807 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" exitCode=1 Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.650919 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" exitCode=1 Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.650866 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"} Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651056 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"} Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651696 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651757 4835 scope.go:117] "RemoveContainer" 
containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651878 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651889 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.651930 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:03 crc kubenswrapper[4835]: E0201 08:05:03.652216 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.707049 4835 scope.go:117] "RemoveContainer" containerID="da94c4b49a85b3d78b7fe6f6c34f81c4b4f32c72ac12ae87fc85dd72c4281f9b" Feb 01 08:05:03 crc kubenswrapper[4835]: I0201 08:05:03.751885 4835 scope.go:117] "RemoveContainer" containerID="2a1708182a0f52858779eb159afdd848df05e03da50061161216cde3066909be" Feb 01 08:05:05 crc kubenswrapper[4835]: I0201 08:05:05.566985 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:05 crc kubenswrapper[4835]: I0201 08:05:05.567270 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:05 crc kubenswrapper[4835]: E0201 08:05:05.567548 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:11 crc kubenswrapper[4835]: E0201 08:05:11.298800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" 
pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:05:11 crc kubenswrapper[4835]: I0201 08:05:11.730572 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:05:12 crc kubenswrapper[4835]: I0201 08:05:12.566922 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:05:12 crc kubenswrapper[4835]: I0201 08:05:12.566970 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:05:12 crc kubenswrapper[4835]: E0201 08:05:12.567366 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:05:14 crc kubenswrapper[4835]: I0201 08:05:14.567759 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:14 crc kubenswrapper[4835]: I0201 08:05:14.568283 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:14 crc kubenswrapper[4835]: I0201 08:05:14.568322 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:14 crc kubenswrapper[4835]: I0201 08:05:14.568450 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:14 crc kubenswrapper[4835]: E0201 08:05:14.568885 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567553 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567627 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567700 4835 
scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567708 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.567740 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:15 crc kubenswrapper[4835]: E0201 08:05:15.760076 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.782246 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"} Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.783386 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.783574 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.783837 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:15 crc kubenswrapper[4835]: I0201 08:05:15.783977 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:15 crc kubenswrapper[4835]: E0201 08:05:15.784528 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:16 crc kubenswrapper[4835]: I0201 08:05:16.567476 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:16 crc kubenswrapper[4835]: I0201 08:05:16.567620 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:16 crc kubenswrapper[4835]: I0201 08:05:16.567796 4835 scope.go:117] "RemoveContainer" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" Feb 01 08:05:16 crc kubenswrapper[4835]: I0201 08:05:16.567812 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:16 crc kubenswrapper[4835]: E0201 08:05:16.568391 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:18 crc kubenswrapper[4835]: I0201 08:05:18.566510 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:18 crc kubenswrapper[4835]: I0201 08:05:18.566913 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:18 crc kubenswrapper[4835]: E0201 08:05:18.567226 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:25 crc kubenswrapper[4835]: I0201 08:05:25.568563 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:25 crc kubenswrapper[4835]: I0201 08:05:25.570622 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:25 crc kubenswrapper[4835]: I0201 08:05:25.570685 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:25 crc 
kubenswrapper[4835]: I0201 08:05:25.570810 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:25 crc kubenswrapper[4835]: E0201 08:05:25.571398 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:26 crc kubenswrapper[4835]: I0201 08:05:26.570584 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:05:26 crc kubenswrapper[4835]: I0201 08:05:26.570619 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:05:26 crc kubenswrapper[4835]: E0201 08:05:26.570905 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:05:29 crc kubenswrapper[4835]: I0201 08:05:29.566876 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:29 crc kubenswrapper[4835]: I0201 08:05:29.567345 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:29 crc kubenswrapper[4835]: E0201 08:05:29.567807 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568210 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568287 4835 
scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568355 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568432 4835 scope.go:117] "RemoveContainer" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568445 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568508 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568710 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.568808 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:30 crc kubenswrapper[4835]: E0201 08:05:30.569480 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:30 crc kubenswrapper[4835]: E0201 08:05:30.780316 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.935328 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd"} Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.935734 4835 scope.go:117] 
"RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.935791 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:30 crc kubenswrapper[4835]: I0201 08:05:30.935873 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:30 crc kubenswrapper[4835]: E0201 08:05:30.936091 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.977864 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" exitCode=1 Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.977938 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"} Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.978805 4835 scope.go:117] "RemoveContainer" containerID="0e32c69ff19092090a438de316ea4536df4c3bad86b49454e5632c8185b99bf4" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.979831 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.979957 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.980005 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.980163 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:34 crc kubenswrapper[4835]: I0201 08:05:34.980253 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:34 crc kubenswrapper[4835]: E0201 08:05:34.980910 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to 
\"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:36 crc kubenswrapper[4835]: I0201 08:05:36.567887 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:36 crc kubenswrapper[4835]: I0201 08:05:36.568324 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:36 crc kubenswrapper[4835]: I0201 08:05:36.568353 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:36 crc kubenswrapper[4835]: I0201 08:05:36.568536 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:36 crc kubenswrapper[4835]: E0201 08:05:36.568917 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:40 crc kubenswrapper[4835]: I0201 08:05:40.568247 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:05:40 crc kubenswrapper[4835]: I0201 08:05:40.568818 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:05:40 crc kubenswrapper[4835]: E0201 08:05:40.569382 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" 
podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:05:42 crc kubenswrapper[4835]: I0201 08:05:42.567589 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:42 crc kubenswrapper[4835]: I0201 08:05:42.567643 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:42 crc kubenswrapper[4835]: E0201 08:05:42.568129 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:44 crc kubenswrapper[4835]: I0201 08:05:44.567748 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:44 crc kubenswrapper[4835]: I0201 08:05:44.568200 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:44 crc kubenswrapper[4835]: I0201 08:05:44.568397 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:44 crc kubenswrapper[4835]: E0201 08:05:44.568957 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.567664 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.568634 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.568685 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.568811 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:05:49 crc kubenswrapper[4835]: I0201 08:05:49.568876 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:05:49 crc kubenswrapper[4835]: E0201 08:05:49.569566 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:05:50 crc kubenswrapper[4835]: I0201 08:05:50.566958 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:50 crc kubenswrapper[4835]: I0201 08:05:50.567051 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:50 crc kubenswrapper[4835]: I0201 08:05:50.567080 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:05:50 crc kubenswrapper[4835]: I0201 08:05:50.567148 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:50 crc kubenswrapper[4835]: E0201 08:05:50.773937 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.160458 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e"} Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.161697 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.161792 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.161924 4835 scope.go:117] "RemoveContainer" 
containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:05:51 crc kubenswrapper[4835]: E0201 08:05:51.162364 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.567339 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:05:51 crc kubenswrapper[4835]: I0201 08:05:51.567398 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:05:51 crc kubenswrapper[4835]: E0201 08:05:51.567990 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:05:56 crc kubenswrapper[4835]: I0201 08:05:56.567626 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:05:56 crc kubenswrapper[4835]: I0201 08:05:56.568017 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:05:56 crc kubenswrapper[4835]: E0201 08:05:56.568484 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:05:57 crc kubenswrapper[4835]: I0201 08:05:57.573047 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:05:57 crc kubenswrapper[4835]: I0201 08:05:57.573147 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:05:57 crc kubenswrapper[4835]: I0201 08:05:57.573275 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:05:57 
crc kubenswrapper[4835]: E0201 08:05:57.573647 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.567216 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568000 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:06:03 crc kubenswrapper[4835]: E0201 08:06:03.568399 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568490 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568594 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568626 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568713 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:06:03 crc kubenswrapper[4835]: I0201 08:06:03.568762 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:06:03 crc kubenswrapper[4835]: E0201 08:06:03.569155 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:06:04 crc kubenswrapper[4835]: I0201 08:06:04.568318 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:06:04 crc kubenswrapper[4835]: I0201 08:06:04.568543 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:06:04 crc kubenswrapper[4835]: I0201 08:06:04.568775 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:06:04 crc kubenswrapper[4835]: E0201 08:06:04.569464 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.339522 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" exitCode=1 Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.339637 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd"} Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.340385 4835 scope.go:117] "RemoveContainer" containerID="119c4ce439526ad42a1eff794697c49a5fd68c0530ba39ed7782d5829e417565" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.341259 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.341456 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.341639 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.341687 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:09 crc kubenswrapper[4835]: E0201 08:06:09.342496 4835 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.567545 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:06:09 crc kubenswrapper[4835]: I0201 08:06:09.567733 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:09 crc kubenswrapper[4835]: E0201 08:06:09.568083 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.418195 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160" exitCode=1 Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.418275 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160"} Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.418884 4835 scope.go:117] "RemoveContainer" containerID="811dcfbbfbce2457a26cf2cfd3d7f241f223d0bd48897b5e6e54984050426b01" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420034 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420190 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420257 4835 scope.go:117] "RemoveContainer" containerID="b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420466 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.420490 4835 
scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:15 crc kubenswrapper[4835]: E0201 08:06:15.421251 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 10s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.567318 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:06:15 crc kubenswrapper[4835]: I0201 08:06:15.567370 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:06:15 crc kubenswrapper[4835]: E0201 08:06:15.567900 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:06:16 crc kubenswrapper[4835]: I0201 08:06:16.567634 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:06:16 crc kubenswrapper[4835]: I0201 08:06:16.568004 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:06:16 crc kubenswrapper[4835]: I0201 08:06:16.568122 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:06:16 crc kubenswrapper[4835]: E0201 08:06:16.568446 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to 
\"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568362 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568535 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568585 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568715 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:06:18 crc kubenswrapper[4835]: I0201 08:06:18.568805 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:06:18 crc kubenswrapper[4835]: E0201 08:06:18.569468 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:06:22 crc kubenswrapper[4835]: I0201 08:06:22.566887 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:06:22 crc kubenswrapper[4835]: I0201 08:06:22.567237 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:22 crc kubenswrapper[4835]: E0201 08:06:22.778577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:23 crc kubenswrapper[4835]: I0201 08:06:23.504192 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" 
event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4"} Feb 01 08:06:23 crc kubenswrapper[4835]: I0201 08:06:23.504452 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:06:23 crc kubenswrapper[4835]: I0201 08:06:23.504800 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:23 crc kubenswrapper[4835]: E0201 08:06:23.505135 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:24 crc kubenswrapper[4835]: I0201 08:06:24.515214 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:24 crc kubenswrapper[4835]: E0201 08:06:24.515972 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:25 crc kubenswrapper[4835]: I0201 08:06:25.192285 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 08:06:25 crc kubenswrapper[4835]: I0201 08:06:25.192371 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 08:06:28 crc kubenswrapper[4835]: I0201 08:06:28.023298 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:29 crc kubenswrapper[4835]: I0201 08:06:29.852458 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:06:29 crc kubenswrapper[4835]: I0201 08:06:29.852497 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:06:29 crc kubenswrapper[4835]: E0201 08:06:29.862735 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.020856 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568402 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568612 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568676 4835 scope.go:117] "RemoveContainer" containerID="b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568818 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.568834 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.569452 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.569619 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.569866 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:06:30 crc kubenswrapper[4835]: E0201 08:06:30.570596 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:06:30 crc kubenswrapper[4835]: E0201 08:06:30.779447 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.890880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3"} Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.891788 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.891874 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.894666 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:30 crc kubenswrapper[4835]: I0201 08:06:30.894703 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:30 crc kubenswrapper[4835]: E0201 08:06:30.896217 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:31 crc kubenswrapper[4835]: I0201 08:06:31.021329 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:33 crc kubenswrapper[4835]: I0201 08:06:33.568787 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:06:33 crc kubenswrapper[4835]: I0201 08:06:33.569807 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:06:33 crc kubenswrapper[4835]: I0201 08:06:33.569875 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:06:33 crc kubenswrapper[4835]: I0201 08:06:33.570041 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:06:33 crc 
kubenswrapper[4835]: I0201 08:06:33.570129 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:06:33 crc kubenswrapper[4835]: E0201 08:06:33.571038 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.022647 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.022770 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.023706 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.023749 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.023796 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4" gracePeriod=30 Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.028291 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:34 crc kubenswrapper[4835]: E0201 08:06:34.323773 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927339 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4" exitCode=0 Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927400 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4"} Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927739 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"} Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927761 4835 scope.go:117] "RemoveContainer" containerID="2b96934ec42777c83ec3ee306e98f917a2620cea47920da84df61961fedda2d1" Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.927917 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:06:34 crc kubenswrapper[4835]: I0201 08:06:34.928281 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:34 crc kubenswrapper[4835]: E0201 08:06:34.928521 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:35 crc kubenswrapper[4835]: I0201 08:06:35.940914 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:35 crc kubenswrapper[4835]: E0201 08:06:35.941129 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:40 crc kubenswrapper[4835]: I0201 08:06:40.023292 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:40 crc kubenswrapper[4835]: I0201 08:06:40.023354 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.021669 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" 
containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.567644 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.567724 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.567829 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:06:43 crc kubenswrapper[4835]: E0201 08:06:43.568122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.568272 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.568356 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.568527 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:43 crc kubenswrapper[4835]: I0201 08:06:43.568539 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:43 crc kubenswrapper[4835]: E0201 08:06:43.568987 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:44 crc kubenswrapper[4835]: I0201 08:06:44.567247 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:06:44 crc kubenswrapper[4835]: I0201 08:06:44.568497 4835 
scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:06:44 crc kubenswrapper[4835]: E0201 08:06:44.568745 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:06:45 crc kubenswrapper[4835]: I0201 08:06:45.021314 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.021270 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.022188 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.023096 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.023256 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.023404 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" gracePeriod=30 Feb 01 08:06:46 crc kubenswrapper[4835]: I0201 08:06:46.025373 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:06:46 crc kubenswrapper[4835]: E0201 08:06:46.238593 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:47 crc 
kubenswrapper[4835]: I0201 08:06:47.062724 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" exitCode=0 Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.062842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"} Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.062991 4835 scope.go:117] "RemoveContainer" containerID="3dc281f01a9ac16c0c33d02b21534ea95495ca1e657991f992efda8792bd3fb4" Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.063981 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.064026 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:47 crc kubenswrapper[4835]: E0201 08:06:47.064399 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576280 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576735 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576768 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576855 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:06:47 crc kubenswrapper[4835]: I0201 08:06:47.576901 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:06:47 crc kubenswrapper[4835]: E0201 08:06:47.577318 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.146956 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" exitCode=1 Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.147001 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"} Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.147397 4835 scope.go:117] "RemoveContainer" containerID="a418d0c4620d18c5a00f66e02a19f54db3e31314477050232692a5aef922b99a" Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148388 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148540 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148585 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148678 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148710 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:06:54 crc kubenswrapper[4835]: I0201 08:06:54.148776 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:06:54 crc kubenswrapper[4835]: E0201 08:06:54.149321 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:06:55 crc kubenswrapper[4835]: I0201 08:06:55.191924 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 08:06:55 crc kubenswrapper[4835]: I0201 08:06:55.191994 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.577649 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578146 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578229 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578315 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578333 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578552 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:06:57 crc kubenswrapper[4835]: I0201 08:06:57.578776 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:06:57 crc kubenswrapper[4835]: E0201 08:06:57.579278 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:06:57 crc kubenswrapper[4835]: E0201 08:06:57.785037 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:58 crc kubenswrapper[4835]: I0201 08:06:58.218159 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"} Feb 01 08:06:58 crc kubenswrapper[4835]: I0201 08:06:58.219249 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:06:58 crc kubenswrapper[4835]: I0201 08:06:58.219369 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:06:58 crc kubenswrapper[4835]: I0201 08:06:58.219587 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:06:58 crc kubenswrapper[4835]: E0201 08:06:58.220163 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:06:59 crc kubenswrapper[4835]: I0201 08:06:59.568024 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:06:59 crc kubenswrapper[4835]: I0201 08:06:59.568086 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:06:59 crc kubenswrapper[4835]: I0201 08:06:59.568154 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:06:59 crc kubenswrapper[4835]: I0201 08:06:59.568179 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:06:59 crc kubenswrapper[4835]: E0201 08:06:59.568683 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:06:59 crc kubenswrapper[4835]: 
E0201 08:06:59.568728 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:07:01 crc kubenswrapper[4835]: I0201 08:07:01.488666 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:07:01 crc kubenswrapper[4835]: E0201 08:07:01.488934 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:07:01 crc kubenswrapper[4835]: E0201 08:07:01.489358 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:09:03.489330546 +0000 UTC m=+2816.609767020 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.568112 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569612 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569649 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569702 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569710 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:07:09 crc kubenswrapper[4835]: I0201 08:07:09.569749 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:07:09 crc kubenswrapper[4835]: E0201 08:07:09.570124 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to 
\"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.566450 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.566494 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.566790 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.566934 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.567189 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.567321 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:07:10 crc kubenswrapper[4835]: E0201 08:07:10.566923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.567503 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:07:10 crc kubenswrapper[4835]: I0201 08:07:10.567730 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:07:10 crc kubenswrapper[4835]: E0201 08:07:10.567750 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:10 crc kubenswrapper[4835]: E0201 08:07:10.568355 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:07:11 crc kubenswrapper[4835]: I0201 08:07:11.567593 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:11 crc kubenswrapper[4835]: I0201 08:07:11.568067 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:11 crc kubenswrapper[4835]: E0201 08:07:11.568626 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:07:14 crc kubenswrapper[4835]: E0201 08:07:14.732737 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:07:15 crc kubenswrapper[4835]: I0201 08:07:15.402906 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.566980 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567109 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567153 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567398 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567417 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:07:20 crc kubenswrapper[4835]: I0201 08:07:20.567509 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:07:20 crc kubenswrapper[4835]: E0201 08:07:20.568180 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:21 crc kubenswrapper[4835]: I0201 08:07:21.567034 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:07:21 crc kubenswrapper[4835]: I0201 08:07:21.567358 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:21 crc kubenswrapper[4835]: E0201 08:07:21.737225 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.474404 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af"} Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.475215 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:22 crc kubenswrapper[4835]: E0201 08:07:22.475564 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.475811 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.567632 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:22 crc kubenswrapper[4835]: I0201 08:07:22.567689 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:22 crc kubenswrapper[4835]: E0201 08:07:22.568071 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:07:23 crc kubenswrapper[4835]: I0201 08:07:23.483393 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:23 crc kubenswrapper[4835]: E0201 08:07:23.484096 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:23 crc kubenswrapper[4835]: I0201 08:07:23.567641 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:07:23 crc kubenswrapper[4835]: I0201 08:07:23.567775 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:07:23 crc kubenswrapper[4835]: I0201 08:07:23.567957 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:07:23 crc kubenswrapper[4835]: E0201 08:07:23.568451 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.191277 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.191335 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.191376 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.191961 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.192043 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549" gracePeriod=600 Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.504533 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549" exitCode=0 Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.504598 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549"} Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.504698 4835 scope.go:117] "RemoveContainer" containerID="3e4314b57f0a368e20ab131998d995f2a88fa6754f2b5bc5a05673969a2186b8" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.568293 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:25 crc kubenswrapper[4835]: I0201 08:07:25.568458 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:25 crc kubenswrapper[4835]: 
I0201 08:07:25.568847 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:25 crc kubenswrapper[4835]: E0201 08:07:25.569859 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:26 crc kubenswrapper[4835]: I0201 08:07:26.515794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerStarted","Data":"5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"} Feb 01 08:07:27 crc kubenswrapper[4835]: I0201 08:07:27.539572 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:27 crc kubenswrapper[4835]: I0201 08:07:27.539929 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:30 crc kubenswrapper[4835]: I0201 08:07:30.539108 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.566923 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.566992 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.567013 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.567059 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.567066 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:07:31 crc kubenswrapper[4835]: I0201 08:07:31.567099 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:07:31 crc kubenswrapper[4835]: E0201 08:07:31.567380 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:32 crc kubenswrapper[4835]: I0201 08:07:32.537655 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.538777 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.538884 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.539661 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.539687 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.539749 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af" gracePeriod=30 Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.540446 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.566593 4835 scope.go:117] "RemoveContainer" 
containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:33 crc kubenswrapper[4835]: I0201 08:07:33.566646 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:33 crc kubenswrapper[4835]: E0201 08:07:33.567033 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:07:33 crc kubenswrapper[4835]: E0201 08:07:33.902323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.586719 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af" exitCode=0 Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.586939 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af"} Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.587292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"} Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.587336 4835 scope.go:117] "RemoveContainer" containerID="7a03cf7c11714cefbc59d4b394b12e40964c5a79e38a0a8769a2275407e1aee9" Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.588058 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:07:34 crc kubenswrapper[4835]: I0201 08:07:34.588497 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:34 crc kubenswrapper[4835]: E0201 08:07:34.588870 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:35 crc kubenswrapper[4835]: I0201 08:07:35.601645 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:35 crc kubenswrapper[4835]: E0201 08:07:35.601965 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.568363 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.568940 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.569135 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.569392 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.569687 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:07:36 crc kubenswrapper[4835]: E0201 08:07:36.569706 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:36 crc kubenswrapper[4835]: I0201 08:07:36.570166 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:07:36 crc kubenswrapper[4835]: E0201 08:07:36.570810 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:07:39 crc kubenswrapper[4835]: I0201 08:07:39.538325 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.537238 4835 prober.go:107] "Probe 
failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.537939 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.567691 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.567857 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.567922 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.568062 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.568084 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:07:42 crc kubenswrapper[4835]: I0201 08:07:42.568183 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:07:42 crc kubenswrapper[4835]: E0201 08:07:42.569015 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.537925 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.538893 4835 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.540213 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.540251 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.540307 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" gracePeriod=30 Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.541318 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:07:45 crc kubenswrapper[4835]: E0201 08:07:45.667809 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.700014 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" exitCode=0 Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.700088 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"} Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.700142 4835 scope.go:117] "RemoveContainer" containerID="e912da0c0e046b602987517a0d1588b0f8b6e72a8848b6c5352c65880ccfe5af" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.701182 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:07:45 crc kubenswrapper[4835]: I0201 08:07:45.701237 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:45 crc kubenswrapper[4835]: E0201 08:07:45.701688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.572402 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.572775 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:47 crc kubenswrapper[4835]: E0201 08:07:47.573052 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.573081 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.573199 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:47 crc kubenswrapper[4835]: I0201 08:07:47.573392 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:47 crc kubenswrapper[4835]: E0201 08:07:47.573904 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:51 crc kubenswrapper[4835]: I0201 08:07:51.568309 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:07:51 crc kubenswrapper[4835]: I0201 08:07:51.569840 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:07:51 crc kubenswrapper[4835]: I0201 08:07:51.570231 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:07:51 crc kubenswrapper[4835]: E0201 08:07:51.571125 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567350 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567754 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567783 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567872 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567881 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:07:53 crc kubenswrapper[4835]: I0201 08:07:53.567929 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:07:53 crc kubenswrapper[4835]: E0201 08:07:53.568299 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:07:56 crc kubenswrapper[4835]: I0201 08:07:56.566824 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:07:56 crc kubenswrapper[4835]: I0201 08:07:56.567169 4835 scope.go:117] "RemoveContainer" 
containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:07:56 crc kubenswrapper[4835]: E0201 08:07:56.567594 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:07:58 crc kubenswrapper[4835]: I0201 08:07:58.568070 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:07:58 crc kubenswrapper[4835]: I0201 08:07:58.568240 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:07:58 crc kubenswrapper[4835]: I0201 08:07:58.568489 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:07:58 crc kubenswrapper[4835]: E0201 08:07:58.568978 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:07:59 crc kubenswrapper[4835]: I0201 08:07:59.566492 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:07:59 crc kubenswrapper[4835]: I0201 08:07:59.566799 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:07:59 crc kubenswrapper[4835]: E0201 08:07:59.567153 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:08:04 crc kubenswrapper[4835]: I0201 08:08:04.567098 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:08:04 crc kubenswrapper[4835]: I0201 08:08:04.567775 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:08:04 
crc kubenswrapper[4835]: I0201 08:08:04.567912 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:08:04 crc kubenswrapper[4835]: E0201 08:08:04.568302 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.577856 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578400 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578616 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578742 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578762 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:08:07 crc kubenswrapper[4835]: I0201 08:08:07.578913 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:08:07 crc kubenswrapper[4835]: E0201 08:08:07.579983 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" 
pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.568106 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.568277 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.568492 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:08:09 crc kubenswrapper[4835]: E0201 08:08:09.568992 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.949808 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3" exitCode=1 Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.949880 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3"} Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.949948 4835 scope.go:117] "RemoveContainer" containerID="b617a357ad18b022ef2b099085b4201aaae89a1fe136b06e63fb522686c13160" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.951013 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.951131 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.951176 4835 scope.go:117] "RemoveContainer" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3" Feb 01 08:08:09 crc kubenswrapper[4835]: I0201 08:08:09.951313 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:08:09 crc kubenswrapper[4835]: E0201 08:08:09.951964 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" 
for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:08:10 crc kubenswrapper[4835]: I0201 08:08:10.567241 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:08:10 crc kubenswrapper[4835]: I0201 08:08:10.567268 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:08:10 crc kubenswrapper[4835]: E0201 08:08:10.567488 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:08:11 crc kubenswrapper[4835]: I0201 08:08:11.567276 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:08:11 crc kubenswrapper[4835]: I0201 08:08:11.567318 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:08:11 crc kubenswrapper[4835]: E0201 08:08:11.567803 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:08:18 crc kubenswrapper[4835]: I0201 08:08:18.567744 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:08:18 crc kubenswrapper[4835]: I0201 08:08:18.568526 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:08:18 crc kubenswrapper[4835]: I0201 08:08:18.568704 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:08:18 crc kubenswrapper[4835]: E0201 08:08:18.569298 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:08:21 crc kubenswrapper[4835]: I0201 08:08:21.567554 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:08:21 crc kubenswrapper[4835]: I0201 08:08:21.569449 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:08:21 crc kubenswrapper[4835]: E0201 08:08:21.570136 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.567864 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.567905 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:08:22 crc kubenswrapper[4835]: E0201 08:08:22.568259 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568401 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568576 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568622 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568719 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568733 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:08:22 crc kubenswrapper[4835]: I0201 08:08:22.568796 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:08:22 crc 
kubenswrapper[4835]: E0201 08:08:22.569346 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:08:23 crc kubenswrapper[4835]: I0201 08:08:23.567820 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:23 crc kubenswrapper[4835]: I0201 08:08:23.568295 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:23 crc kubenswrapper[4835]: I0201 08:08:23.568340 4835 scope.go:117] "RemoveContainer" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3" Feb 01 08:08:23 crc kubenswrapper[4835]: I0201 08:08:23.568489 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:08:23 crc kubenswrapper[4835]: E0201 08:08:23.569022 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:08:31 crc kubenswrapper[4835]: I0201 08:08:31.568306 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:08:31 crc kubenswrapper[4835]: I0201 08:08:31.569477 4835 
scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:08:31 crc kubenswrapper[4835]: I0201 08:08:31.569665 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:08:31 crc kubenswrapper[4835]: E0201 08:08:31.570152 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.567171 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.567950 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.567996 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.568113 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.568133 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:08:33 crc kubenswrapper[4835]: I0201 08:08:33.568219 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:08:33 crc kubenswrapper[4835]: E0201 08:08:33.568840 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:08:34 crc kubenswrapper[4835]: I0201 08:08:34.567401 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:08:34 crc kubenswrapper[4835]: I0201 08:08:34.567475 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:08:34 crc kubenswrapper[4835]: E0201 08:08:34.567713 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:08:35 crc kubenswrapper[4835]: I0201 08:08:35.567632 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:08:35 crc kubenswrapper[4835]: I0201 08:08:35.567689 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:08:35 crc kubenswrapper[4835]: E0201 08:08:35.568286 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:08:37 crc kubenswrapper[4835]: I0201 08:08:37.595136 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:37 crc kubenswrapper[4835]: I0201 08:08:37.595795 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:37 crc kubenswrapper[4835]: I0201 08:08:37.595845 4835 scope.go:117] "RemoveContainer" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3" Feb 01 08:08:37 crc kubenswrapper[4835]: I0201 08:08:37.595968 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:08:37 crc kubenswrapper[4835]: E0201 08:08:37.839976 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", 
failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:08:38 crc kubenswrapper[4835]: I0201 08:08:38.226090 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"} Feb 01 08:08:38 crc kubenswrapper[4835]: I0201 08:08:38.227064 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:38 crc kubenswrapper[4835]: I0201 08:08:38.227181 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:38 crc kubenswrapper[4835]: I0201 08:08:38.227390 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:08:38 crc kubenswrapper[4835]: E0201 08:08:38.227975 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567099 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567455 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:08:45 crc kubenswrapper[4835]: E0201 08:08:45.567648 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567666 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567727 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567814 4835 scope.go:117] "RemoveContainer" 
containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567880 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567960 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.567990 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.568056 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.568065 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:08:45 crc kubenswrapper[4835]: E0201 08:08:45.568079 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:08:45 crc kubenswrapper[4835]: I0201 08:08:45.568111 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:08:45 crc kubenswrapper[4835]: E0201 08:08:45.568506 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 
08:08:49 crc kubenswrapper[4835]: I0201 08:08:49.572944 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:08:49 crc kubenswrapper[4835]: I0201 08:08:49.573612 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:08:49 crc kubenswrapper[4835]: E0201 08:08:49.573800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:08:50 crc kubenswrapper[4835]: I0201 08:08:50.568504 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:50 crc kubenswrapper[4835]: I0201 08:08:50.568584 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:50 crc kubenswrapper[4835]: I0201 08:08:50.568692 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8" Feb 01 08:08:50 crc kubenswrapper[4835]: E0201 08:08:50.569025 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.397028 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f" exitCode=1 Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.397088 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"} Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.397382 4835 scope.go:117] "RemoveContainer" containerID="8e5073f26383eeb4c40644914a83b6b270ec7d095e593a2bfb93470d60b385bd" Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.398091 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136" Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.398167 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e" Feb 01 08:08:53 crc 
Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.398263 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:08:53 crc kubenswrapper[4835]: I0201 08:08:53.398283 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:08:53 crc kubenswrapper[4835]: E0201 08:08:53.398630 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.355322 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"]
Feb 01 08:08:54 crc kubenswrapper[4835]: E0201 08:08:54.355985 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="extract-content"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.356004 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="extract-content"
Feb 01 08:08:54 crc kubenswrapper[4835]: E0201 08:08:54.356018 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="extract-utilities"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.356025 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="extract-utilities"
Feb 01 08:08:54 crc kubenswrapper[4835]: E0201 08:08:54.356044 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.356051 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.356200 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebc21e2-e3f2-411b-ade8-2c3137cc286e" containerName="registry-server"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.357171 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.363684 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"]
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.367765 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vwdqc"/"default-dockercfg-k5f5r"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.371975 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vwdqc"/"openshift-service-ca.crt"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.372393 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vwdqc"/"kube-root-ca.crt"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.441840 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.441940 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2blml\" (UniqueName: \"kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.543228 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.543635 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2blml\" (UniqueName: \"kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.543740 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.564350 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2blml\" (UniqueName: \"kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml\") pod \"must-gather-c7xxg\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.678240 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwdqc/must-gather-c7xxg"
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.921211 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"]
Feb 01 08:08:54 crc kubenswrapper[4835]: I0201 08:08:54.936992 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 01 08:08:55 crc kubenswrapper[4835]: I0201 08:08:55.419985 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" event={"ID":"dfdcbe67-d5e0-4882-b2d9-e039513a25f0","Type":"ContainerStarted","Data":"8aeae16b7dc6696e2eb129853e351d55daf6c70724a0e8dd2121c1356b4e3980"}
Feb 01 08:08:56 crc kubenswrapper[4835]: I0201 08:08:56.566876 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:08:56 crc kubenswrapper[4835]: I0201 08:08:56.567265 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:08:56 crc kubenswrapper[4835]: I0201 08:08:56.567383 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:08:56 crc kubenswrapper[4835]: E0201 08:08:56.567768 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577467 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577552 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577579 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577635 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577660 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:08:57 crc kubenswrapper[4835]: I0201 08:08:57.577702 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:08:57 crc kubenswrapper[4835]: E0201 08:08:57.578057 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.471026 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c" exitCode=1
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.471119 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"}
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.471586 4835 scope.go:117] "RemoveContainer" containerID="7189761382c146038894eae5d5a8aa21ca1dbcfad23c65e4903f28cd18007996"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.473205 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.473383 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.473638 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.473899 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:08:59 crc kubenswrapper[4835]: E0201 08:08:59.475052 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.481194 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" event={"ID":"dfdcbe67-d5e0-4882-b2d9-e039513a25f0","Type":"ContainerStarted","Data":"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7"}
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.481279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" event={"ID":"dfdcbe67-d5e0-4882-b2d9-e039513a25f0","Type":"ContainerStarted","Data":"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"}
Feb 01 08:08:59 crc kubenswrapper[4835]: I0201 08:08:59.549694 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" podStartSLOduration=1.8787696569999999 podStartE2EDuration="5.549675495s" podCreationTimestamp="2026-02-01 08:08:54 +0000 UTC" firstStartedPulling="2026-02-01 08:08:54.93680214 +0000 UTC m=+2808.057238574" lastFinishedPulling="2026-02-01 08:08:58.607707928 +0000 UTC m=+2811.728144412" observedRunningTime="2026-02-01 08:08:59.538959646 +0000 UTC m=+2812.659396090" watchObservedRunningTime="2026-02-01 08:08:59.549675495 +0000 UTC m=+2812.670111949"
Feb 01 08:09:00 crc kubenswrapper[4835]: I0201 08:09:00.567621 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:09:00 crc kubenswrapper[4835]: I0201 08:09:00.567666 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:09:00 crc kubenswrapper[4835]: E0201 08:09:00.567991 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:09:03 crc kubenswrapper[4835]: I0201 08:09:03.576138 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 08:09:03 crc kubenswrapper[4835]: E0201 08:09:03.576327 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found
Feb 01 08:09:03 crc kubenswrapper[4835]: E0201 08:09:03.577074 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:11:05.577060748 +0000 UTC m=+2938.697497182 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found
Feb 01 08:09:04 crc kubenswrapper[4835]: I0201 08:09:04.566629 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:04 crc kubenswrapper[4835]: I0201 08:09:04.567001 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:04 crc kubenswrapper[4835]: E0201 08:09:04.567333 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:05 crc kubenswrapper[4835]: I0201 08:09:05.568633 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:09:05 crc kubenswrapper[4835]: I0201 08:09:05.568736 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:09:05 crc kubenswrapper[4835]: I0201 08:09:05.568855 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:05 crc kubenswrapper[4835]: I0201 08:09:05.568869 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:09:05 crc kubenswrapper[4835]: E0201 08:09:05.569315 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566641 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566918 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566938 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566986 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.566994 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:09:08 crc kubenswrapper[4835]: I0201 08:09:08.567024 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:09:08 crc kubenswrapper[4835]: E0201 08:09:08.567332 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:09:14 crc kubenswrapper[4835]: I0201 08:09:14.567238 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:09:14 crc kubenswrapper[4835]: I0201 08:09:14.567816 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:09:14 crc kubenswrapper[4835]: I0201 08:09:14.567893 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:09:14 crc kubenswrapper[4835]: I0201 08:09:14.567901 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:09:15 crc kubenswrapper[4835]: E0201 08:09:15.059137 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.567638 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.567697 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.567727 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.567756 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:09:15 crc kubenswrapper[4835]: E0201 08:09:15.567966 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:09:15 crc kubenswrapper[4835]: E0201 08:09:15.568090 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.650979 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" exitCode=1
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651050 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"}
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651091 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"}
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651107 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"}
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651061 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" exitCode=1
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.651123 4835 scope.go:117] "RemoveContainer" containerID="710afb6113b62f968cb1ba130a44f7d1ccf3fdf28f8a738dfa7c16de54a59de5"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.652226 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.652322 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.652451 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.652461 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:15 crc kubenswrapper[4835]: E0201 08:09:15.652837 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:15 crc kubenswrapper[4835]: I0201 08:09:15.716337 4835 scope.go:117] "RemoveContainer" containerID="0108b7109877c4e3c9d683c7aef6cdc6ee2e4e9f33ae1ab30461b34e423cc541"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.671263 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" exitCode=1
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.671285 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"}
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.671367 4835 scope.go:117] "RemoveContainer" containerID="87f0c0ae4408587465dab8124d3be1db6ad3eccf9e249f7a83e0c575efc39d41"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.672476 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.672585 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.672721 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:09:16 crc kubenswrapper[4835]: I0201 08:09:16.672737 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:16 crc kubenswrapper[4835]: E0201 08:09:16.673351 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 20s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:18 crc kubenswrapper[4835]: E0201 08:09:18.404353 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.567638 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.567715 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.567808 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.567817 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:09:18 crc kubenswrapper[4835]: E0201 08:09:18.568173 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:18 crc kubenswrapper[4835]: I0201 08:09:18.704510 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567475 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567860 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567882 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567929 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567936 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:09:21 crc kubenswrapper[4835]: I0201 08:09:21.567981 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:09:21 crc kubenswrapper[4835]: E0201 08:09:21.568416 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:09:25 crc kubenswrapper[4835]: I0201 08:09:25.191719 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 01 08:09:25 crc kubenswrapper[4835]: I0201 08:09:25.192356 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.778486 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e" exitCode=1
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.778553 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"}
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.779562 4835 scope.go:117] "RemoveContainer" containerID="92e19c163e2de72bfddfab94aa60f51bee78d43c0a21f8ad5a34915b58f7acf3"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780259 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780340 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780371 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780492 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:27 crc kubenswrapper[4835]: I0201 08:09:27.780506 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.293532 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.566981 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.567248 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.567337 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.567356 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.567465 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.567595 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.568988 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.569290 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.569678 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.569837 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.738948 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.795791 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28"}
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.796573 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.796649 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.796769 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.797185 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.807815 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" exitCode=1
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.807861 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" exitCode=1
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.807918 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"}
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808002 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"}
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808039 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"}
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808082 4835 scope.go:117] "RemoveContainer" containerID="5a465aaf9343b727c8c3cffc6ab7d88b0563287319cb1420f3040ea183c2c02e"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808864 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808939 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.808966 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.809011 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:28 crc kubenswrapper[4835]: E0201 08:09:28.809295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:28 crc kubenswrapper[4835]: I0201 08:09:28.884286 4835 scope.go:117] "RemoveContainer" containerID="83156d80fef436a7d164017e91b2d804248a6eb8ac23ad196ca36658341ce136"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.821648 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" exitCode=1
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.821838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"}
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822037 4835 scope.go:117] "RemoveContainer" containerID="3947b82b654a2a4d7188e3173a6522abca1a04140514c5030e77679b089026e8"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822729 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822825 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822865 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822939 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:29 crc kubenswrapper[4835]: I0201 08:09:29.822949 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:09:29 crc kubenswrapper[4835]: E0201 08:09:29.823482 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.567675 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568340 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568368 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568488 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568498 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:09:32 crc kubenswrapper[4835]: I0201 08:09:32.568539 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:09:32 crc kubenswrapper[4835]: E0201 08:09:32.568888 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:09:39 crc kubenswrapper[4835]: I0201 08:09:39.573928 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
Feb 01 08:09:39 crc kubenswrapper[4835]: I0201 08:09:39.575449 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61"
Feb 01 08:09:39 crc kubenswrapper[4835]: E0201 08:09:39.575962 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.494347 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/util/0.log"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566652 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566681 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566805 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566857 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63"
Feb 01 08:09:43 crc kubenswrapper[4835]: E0201 08:09:43.566879 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.566960 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09"
Feb 01 08:09:43 crc kubenswrapper[4835]: E0201 08:09:43.567164 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567560 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567611 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567633 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567677 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567685 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.567715 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763"
Feb 01 08:09:43 crc kubenswrapper[4835]: E0201 08:09:43.567979 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568458 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568516 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568537 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568582 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.568589 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:09:43 crc kubenswrapper[4835]: E0201 08:09:43.568827 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.635465 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/pull/0.log"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.670866 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/util/0.log"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.672470 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/pull/0.log"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.877656 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/pull/0.log"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.884705 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/util/0.log"
Feb 01 08:09:43 crc kubenswrapper[4835]: I0201 08:09:43.928907 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_55c7a49163ba348c10e2be21119f4ca8799dffa34873699cfe8f8b6d7bkxfxf_34b15f05-4416-4999-ba8c-3bc64ada7f04/extract/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.071024 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/util/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.226152 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/pull/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.281892 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/util/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.294491 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/pull/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.441436 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/pull/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.477959 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/extract/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.484659 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590rb29k_59f26b1b-b8b2-4479-8e35-a7a46c629d35/util/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.754382 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/util/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.888173 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/util/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.897622 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/pull/0.log"
Feb 01 08:09:44 crc kubenswrapper[4835]: I0201 08:09:44.912126 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.046826 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.077797 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/util/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.078795 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4efsmxzm_667e6752-afe4-4918-9457-57c5eb1a6aae/extract/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.283011 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-index-fmwqp_4fa5ae77-daab-43fa-b798-b9895f717e0a/registry-server/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.463765 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/util/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.682098 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.701341 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.726389 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/util/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.897954 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/pull/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.936983 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/extract/0.log"
Feb 01 08:09:45 crc kubenswrapper[4835]: I0201 08:09:45.987703 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f75766x6z4_4326f882-2be0-41a9-b71d-14e811ba9343/util/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.116244 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/util/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.329668 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/pull/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.414755 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/util/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.416941 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/pull/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.602767 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/extract/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.620031 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/util/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.647460 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ea98c876befcc43784f1cf21abccc1fc6ac442ec30e62c8027746c8dc94v8s5_846fe1f2-f96b-4447-9336-d58ac094d486/pull/0.log"
Feb 01 08:09:46 crc kubenswrapper[4835]: I0201 08:09:46.805640 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/util/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.017687 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/util/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.019190 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/pull/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.055218 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/pull/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.244676 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/pull/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.251740 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/util/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.272683 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40vq86d_147369ac-5553-4aa7-944b-878065951228/extract/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.465497 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6f4d667fdd-rfzbv_aeafdd64-5ab8-429a-9411-bdfe3e0780af/manager/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.572769 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-index-x9r54_c754e3d7-d607-4427-b349-b5c22df261ec/registry-server/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.666035 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7ddb6bb5f-7x7n4_84eb5c79-bae7-43b3-9b04-c949dc8c5ec4/manager/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.805041 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-index-6hv5l_09002d70-8878-4f31-bc75-ddf7378a8564/registry-server/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.882558 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5fc7bf5575-vbqwd_73820432-e4ca-45a7-ae9c-77a538ce1d20/manager/0.log"
Feb 01 08:09:47 crc kubenswrapper[4835]: I0201 08:09:47.940912 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-854bb59648-nqzs5_2562b9ca-8a8f-4a90-8e8f-fd3e4b235603/manager/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.062300 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-index-hgssn_bc494048-8b2c-4d2e-925e-8b1b779dab89/registry-server/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.081964 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-779fc9694b-fhcz9_b76bd603-252c-4c26-a1c7-0009be5661be/operator/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.170083 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-index-nztp8_be408dba-dcbf-40e4-9b83-cd67424ad82d/registry-server/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.294287 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-7b5bf4689c-j4d4r_26de1ab5-eb0d-4fe4-83ad-25f2262bd958/manager/0.log"
Feb 01 08:09:48 crc kubenswrapper[4835]: I0201 08:09:48.339190 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-index-tj2nn_ebf9c948-3fde-47f0-aa35-856193c1a275/registry-server/0.log"
Feb 01 08:09:54 crc kubenswrapper[4835]: I0201 08:09:54.567267 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8"
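The long run of "Finished parsing log file" entries above comes from the kubelet's log package reading container log files under /var/log/pods. Files in that tree use the CRI logging format: one line per write, "<RFC3339Nano timestamp> <stream> <P|F> <message>", where P marks a partial line and F a full one. A minimal parser sketch for that format (the struct and function names here are mine, not kubelet identifiers):

    package main

    import (
    	"fmt"
    	"strings"
    	"time"
    )

    // criLogLine models one line of a CRI container log file, e.g.
    // /var/log/pods/<ns>_<pod>_<uid>/<container>/<restartCount>.log.
    type criLogLine struct {
    	When    time.Time
    	Stream  string // "stdout" or "stderr"
    	Partial bool   // "P" = partial line, "F" = full line
    	Message string
    }

    func parseCRILogLine(line string) (criLogLine, error) {
    	// The message itself may contain spaces, so split into at most 4 fields.
    	parts := strings.SplitN(line, " ", 4)
    	if len(parts) != 4 {
    		return criLogLine{}, fmt.Errorf("malformed log line: %q", line)
    	}
    	ts, err := time.Parse(time.RFC3339Nano, parts[0])
    	if err != nil {
    		return criLogLine{}, err
    	}
    	return criLogLine{When: ts, Stream: parts[1], Partial: parts[2] == "P", Message: parts[3]}, nil
    }

    func main() {
    	l, _ := parseCRILogLine("2026-02-01T08:09:43.635465000Z stdout F starting pull")
    	fmt.Printf("%s %s partial=%v: %s\n", l.When.Format(time.RFC3339), l.Stream, l.Partial, l.Message)
    }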
containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:09:54 crc kubenswrapper[4835]: I0201 08:09:54.567758 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:09:54 crc kubenswrapper[4835]: E0201 08:09:54.762113 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.192167 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.192229 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.695570 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"} Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.695849 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:09:55 crc kubenswrapper[4835]: I0201 08:09:55.696129 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:09:55 crc kubenswrapper[4835]: E0201 08:09:55.696301 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.567971 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.568333 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.568363 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e" Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.568451 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f" Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.568460 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:09:56 crc kubenswrapper[4835]: E0201 08:09:56.568891 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.706896 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" exitCode=1 Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.706937 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"} Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.706971 4835 scope.go:117] "RemoveContainer" containerID="1830a3f8621f68d77a13ee69b5cbfa87a203cf2764bc42c76e3bb5d1e903ef61" Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.707701 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:09:56 crc kubenswrapper[4835]: I0201 08:09:56.707736 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:09:56 crc kubenswrapper[4835]: E0201 08:09:56.708106 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.535025 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575208 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575282 4835 scope.go:117] "RemoveContainer" 
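The "Probe failed" entries above show the kubelet's HTTP liveness probe getting "connection refused" on 127.0.0.1:8798. For HTTP probes the kubelet treats any response status in [200, 400) as success; connection errors and any other status count as failures, and enough consecutive failures mark the container unhealthy and trigger a restart, which is what feeds the CrashLoopBackOff cycles seen throughout this log. A sketch of that success rule (the URL and timeout here are illustrative, not taken from a pod spec):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeOnce mimics the kubelet's HTTP-probe success rule: a status code in
    // [200, 400) passes; connection errors (like the "connection refused" above)
    // and other status codes fail.
    func probeOnce(url string, timeout time.Duration) error {
    	client := &http.Client{Timeout: timeout}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. dial tcp 127.0.0.1:8798: connect: connection refused
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		return nil
    	}
    	return fmt.Errorf("unhealthy status: %d", resp.StatusCode)
    }

    func main() {
    	if err := probeOnce("http://127.0.0.1:8798/health", 1*time.Second); err != nil {
    		fmt.Println("Probe failed:", err)
    	}
    }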
containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575386 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:09:57 crc kubenswrapper[4835]: E0201 08:09:57.575729 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575855 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.575877 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.577900 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.577970 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.577998 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.578100 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.578115 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.578158 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:09:57 crc kubenswrapper[4835]: E0201 08:09:57.578520 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.720048 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:09:57 crc kubenswrapper[4835]: I0201 08:09:57.720073 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:09:57 crc kubenswrapper[4835]: E0201 08:09:57.720373 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:09:57 crc kubenswrapper[4835]: E0201 08:09:57.756305 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733080 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" exitCode=1 Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733218 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716"} Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733324 4835 scope.go:117] "RemoveContainer" containerID="c5ef2fac74203056d56d0f2c2807904f4ec65c882bc7371b2cb8c90b5a97f2ec" Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733929 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.733958 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.734071 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:09:58 crc kubenswrapper[4835]: I0201 08:09:58.734103 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:09:58 
crc kubenswrapper[4835]: E0201 08:09:58.734237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:09:58 crc kubenswrapper[4835]: E0201 08:09:58.734849 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:10:00 crc kubenswrapper[4835]: I0201 08:10:00.019013 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:10:00 crc kubenswrapper[4835]: I0201 08:10:00.019745 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:00 crc kubenswrapper[4835]: I0201 08:10:00.019764 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:00 crc kubenswrapper[4835]: E0201 08:10:00.020027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:01 crc kubenswrapper[4835]: I0201 08:10:01.019702 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:10:01 crc kubenswrapper[4835]: I0201 08:10:01.021765 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:01 crc kubenswrapper[4835]: I0201 08:10:01.021927 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:01 crc kubenswrapper[4835]: E0201 08:10:01.022358 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.340206 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"] Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.342209 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.364225 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"] Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.420335 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.420459 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.420631 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.522283 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.522341 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.522386 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.522861 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " 
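The reconciler_common and operation_generator entries above trace the kubelet volume manager's reconcile loop for pod certified-operators-w6xsk: each volume in the pod spec (utilities, catalog-content, kube-api-access-6kz84) is verified as attached and then mounted, and later in this log the same volumes are unmounted and reported detached once the pod is deleted. A toy model of that desired-state/actual-state reconciliation pattern (names and structure are illustrative, not kubelet code):

    package main

    import "fmt"

    // reconcile mounts anything desired but not yet mounted, and unmounts
    // anything mounted but no longer desired -- the general shape of the
    // volume manager's loop behind the entries above.
    func reconcile(desired, actual map[string]bool) {
    	for vol := range desired {
    		if !actual[vol] {
    			fmt.Printf("MountVolume started for volume %q\n", vol)
    			actual[vol] = true
    		}
    	}
    	for vol := range actual {
    		if !desired[vol] {
    			fmt.Printf("UnmountVolume started for volume %q\n", vol)
    			delete(actual, vol)
    		}
    	}
    }

    func main() {
    	desired := map[string]bool{"utilities": true, "catalog-content": true, "kube-api-access-6kz84": true}
    	actual := map[string]bool{}
    	reconcile(desired, actual) // mounts all three, as in the log
    	reconcile(map[string]bool{}, actual) // pod deleted: unmounts all three
    }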
pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.523096 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.540340 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") pod \"certified-operators-w6xsk\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:02 crc kubenswrapper[4835]: I0201 08:10:02.662429 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:03 crc kubenswrapper[4835]: I0201 08:10:03.150638 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"] Feb 01 08:10:03 crc kubenswrapper[4835]: I0201 08:10:03.783136 4835 generic.go:334] "Generic (PLEG): container finished" podID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerID="b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead" exitCode=0 Feb 01 08:10:03 crc kubenswrapper[4835]: I0201 08:10:03.783217 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerDied","Data":"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead"} Feb 01 08:10:03 crc kubenswrapper[4835]: I0201 08:10:03.783525 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerStarted","Data":"83bf7050a3928eff21dd7ca58ac3da1b5dd3eefee3c5bca8bc925f193a4e6dc0"} Feb 01 08:10:04 crc kubenswrapper[4835]: I0201 08:10:04.362120 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ngjw6_a67dd2fd-8463-4887-94b7-405df03c5c0a/control-plane-machine-set-operator/0.log" Feb 01 08:10:04 crc kubenswrapper[4835]: I0201 08:10:04.529820 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-whqd4_8924e4db-3c47-4e66-90d1-e74e49f3a65d/kube-rbac-proxy/0.log" Feb 01 08:10:04 crc kubenswrapper[4835]: I0201 08:10:04.550885 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-whqd4_8924e4db-3c47-4e66-90d1-e74e49f3a65d/machine-api-operator/0.log" Feb 01 08:10:04 crc kubenswrapper[4835]: I0201 08:10:04.791448 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerStarted","Data":"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452"} Feb 01 08:10:05 crc kubenswrapper[4835]: I0201 08:10:05.801482 4835 generic.go:334] "Generic (PLEG): container finished" podID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerID="19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452" exitCode=0 Feb 01 08:10:05 crc kubenswrapper[4835]: I0201 08:10:05.801518 4835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerDied","Data":"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452"} Feb 01 08:10:06 crc kubenswrapper[4835]: I0201 08:10:06.811190 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerStarted","Data":"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4"} Feb 01 08:10:06 crc kubenswrapper[4835]: I0201 08:10:06.828809 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6xsk" podStartSLOduration=2.40124141 podStartE2EDuration="4.828790214s" podCreationTimestamp="2026-02-01 08:10:02 +0000 UTC" firstStartedPulling="2026-02-01 08:10:03.78481161 +0000 UTC m=+2876.905248044" lastFinishedPulling="2026-02-01 08:10:06.212360424 +0000 UTC m=+2879.332796848" observedRunningTime="2026-02-01 08:10:06.825849117 +0000 UTC m=+2879.946285551" watchObservedRunningTime="2026-02-01 08:10:06.828790214 +0000 UTC m=+2879.949226658" Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.573430 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.574058 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.574149 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e" Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.574280 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f" Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.574345 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:07 crc kubenswrapper[4835]: E0201 08:10:07.752835 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.824482 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"} Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.825461 4835 scope.go:117] 
"RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.825518 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.825597 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f" Feb 01 08:10:07 crc kubenswrapper[4835]: I0201 08:10:07.825610 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:07 crc kubenswrapper[4835]: E0201 08:10:07.825896 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=object-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.566829 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567122 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567146 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567188 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567195 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.567225 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:10:08 crc kubenswrapper[4835]: I0201 08:10:08.841387 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"} Feb 01 08:10:09 crc kubenswrapper[4835]: E0201 08:10:09.251849 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853769 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" exitCode=1 Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853976 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" exitCode=1 Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853986 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" exitCode=1 Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853992 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" exitCode=1 Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.853990 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"} Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.854037 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"} Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.854049 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"} Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.854060 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"} Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.854076 4835 scope.go:117] "RemoveContainer" containerID="40473b53367a96b571f8b754073bb6267f10d47f936132f7c7217cdd2d71a97c" Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861321 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861568 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861598 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861644 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.861650 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:09 crc 
kubenswrapper[4835]: I0201 08:10:09.861687 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:09 crc kubenswrapper[4835]: E0201 08:10:09.861990 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.920775 4835 scope.go:117] "RemoveContainer" containerID="562b4c46055f8f95e1431cd27dd7c8eddc18a8560efad0b0be6eab6c830f7763" Feb 01 08:10:09 crc kubenswrapper[4835]: I0201 08:10:09.998602 4835 scope.go:117] "RemoveContainer" containerID="9bca3aa49f0dc4bc85bb9089b364bc7326ab314d14336649ea6afa25dcba8a2d" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.056261 4835 scope.go:117] "RemoveContainer" containerID="a205c87b76b92d9109615950f2839cc1d714fadb0b64182ce7c54a49eb3242cd" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.867917 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" exitCode=1 Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.867993 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e"} Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.868211 4835 scope.go:117] "RemoveContainer" containerID="8bcb519d1f2da511243e672a8e26b9d46f7b5e77272716a991042bab6a914d4d" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.868846 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.868897 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.868918 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 
08:10:10.868986 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:10:10 crc kubenswrapper[4835]: E0201 08:10:10.869249 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.875881 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.875958 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.875979 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.876022 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.876029 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:10 crc kubenswrapper[4835]: I0201 08:10:10.876059 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:10 crc kubenswrapper[4835]: E0201 08:10:10.876498 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for 
\"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.567260 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.567626 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.567724 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.567750 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:10:12 crc kubenswrapper[4835]: E0201 08:10:12.567960 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:12 crc kubenswrapper[4835]: E0201 08:10:12.568001 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.663351 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.663652 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.711974 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:12 crc kubenswrapper[4835]: I0201 08:10:12.970956 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:13 crc kubenswrapper[4835]: I0201 08:10:13.018938 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"] Feb 01 08:10:14 crc kubenswrapper[4835]: I0201 08:10:14.917594 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w6xsk" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="registry-server" 
containerID="cri-o://e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" gracePeriod=2 Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.306044 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.403499 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") pod \"655042b7-c713-4116-b191-f8e9c03ac3b0\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.403671 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") pod \"655042b7-c713-4116-b191-f8e9c03ac3b0\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.403716 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") pod \"655042b7-c713-4116-b191-f8e9c03ac3b0\" (UID: \"655042b7-c713-4116-b191-f8e9c03ac3b0\") " Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.404625 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities" (OuterVolumeSpecName: "utilities") pod "655042b7-c713-4116-b191-f8e9c03ac3b0" (UID: "655042b7-c713-4116-b191-f8e9c03ac3b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.409166 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84" (OuterVolumeSpecName: "kube-api-access-6kz84") pod "655042b7-c713-4116-b191-f8e9c03ac3b0" (UID: "655042b7-c713-4116-b191-f8e9c03ac3b0"). InnerVolumeSpecName "kube-api-access-6kz84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.457041 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "655042b7-c713-4116-b191-f8e9c03ac3b0" (UID: "655042b7-c713-4116-b191-f8e9c03ac3b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.505639 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.505685 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kz84\" (UniqueName: \"kubernetes.io/projected/655042b7-c713-4116-b191-f8e9c03ac3b0-kube-api-access-6kz84\") on node \"crc\" DevicePath \"\"" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.505701 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/655042b7-c713-4116-b191-f8e9c03ac3b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927154 4835 generic.go:334] "Generic (PLEG): container finished" podID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerID="e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" exitCode=0 Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927198 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerDied","Data":"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4"} Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927206 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsk" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927227 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsk" event={"ID":"655042b7-c713-4116-b191-f8e9c03ac3b0","Type":"ContainerDied","Data":"83bf7050a3928eff21dd7ca58ac3da1b5dd3eefee3c5bca8bc925f193a4e6dc0"} Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.927248 4835 scope.go:117] "RemoveContainer" containerID="e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.950694 4835 scope.go:117] "RemoveContainer" containerID="19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452" Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.956476 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"] Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.983157 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w6xsk"] Feb 01 08:10:15 crc kubenswrapper[4835]: I0201 08:10:15.983339 4835 scope.go:117] "RemoveContainer" containerID="b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.020104 4835 scope.go:117] "RemoveContainer" containerID="e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" Feb 01 08:10:16 crc kubenswrapper[4835]: E0201 08:10:16.020555 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4\": container with ID starting with e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4 not found: ID does not exist" containerID="e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.020595 
4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4"} err="failed to get container status \"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4\": rpc error: code = NotFound desc = could not find container \"e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4\": container with ID starting with e105c1a45fa47a72deb7d979cef2ebf106281ceb52024e82a9fb011fe4c62aa4 not found: ID does not exist" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.020623 4835 scope.go:117] "RemoveContainer" containerID="19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452" Feb 01 08:10:16 crc kubenswrapper[4835]: E0201 08:10:16.021046 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452\": container with ID starting with 19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452 not found: ID does not exist" containerID="19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.021165 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452"} err="failed to get container status \"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452\": rpc error: code = NotFound desc = could not find container \"19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452\": container with ID starting with 19128cba4fc197f9569bd2f992d5a53fb687946a845175629a2b4964fee74452 not found: ID does not exist" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.021238 4835 scope.go:117] "RemoveContainer" containerID="b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead" Feb 01 08:10:16 crc kubenswrapper[4835]: E0201 08:10:16.021807 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead\": container with ID starting with b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead not found: ID does not exist" containerID="b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead" Feb 01 08:10:16 crc kubenswrapper[4835]: I0201 08:10:16.021886 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead"} err="failed to get container status \"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead\": rpc error: code = NotFound desc = could not find container \"b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead\": container with ID starting with b0ccb6fe0ff27f1d48145d3258c6290a9942bc9acb0ecadafb24a87fc0e5fead not found: ID does not exist" Feb 01 08:10:17 crc kubenswrapper[4835]: I0201 08:10:17.578211 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" path="/var/lib/kubelet/pods/655042b7-c713-4116-b191-f8e9c03ac3b0/volumes" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.567593 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.568431 4835 scope.go:117] "RemoveContainer" 
containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.568540 4835 scope.go:117] "RemoveContainer" containerID="b20f878fd8d5a96f7ffaecf16eba4be492504f81276eb5a94beefb916ebfaa3f" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.568550 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:20 crc kubenswrapper[4835]: E0201 08:10:20.781268 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.968308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"4c18a6c0ad7fc9f3254096d7bfa007b9115d0360f41fd74b092f41a03c6d622a"} Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.969591 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.969698 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:20 crc kubenswrapper[4835]: I0201 08:10:20.969862 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:20 crc kubenswrapper[4835]: E0201 08:10:20.970323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567106 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567193 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567223 4835 scope.go:117] "RemoveContainer" 
containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567288 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567296 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:21 crc kubenswrapper[4835]: I0201 08:10:21.567340 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:21 crc kubenswrapper[4835]: E0201 08:10:21.567714 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:23 crc kubenswrapper[4835]: I0201 08:10:23.566912 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:10:23 crc kubenswrapper[4835]: I0201 08:10:23.567273 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:10:23 crc kubenswrapper[4835]: E0201 08:10:23.567548 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:10:24 crc kubenswrapper[4835]: I0201 08:10:24.567202 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:24 crc kubenswrapper[4835]: I0201 08:10:24.567654 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:24 crc 
kubenswrapper[4835]: E0201 08:10:24.568865 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.191723 4835 patch_prober.go:28] interesting pod/machine-config-daemon-wdt78 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.191815 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.191882 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.192940 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"} pod="openshift-machine-config-operator/machine-config-daemon-wdt78" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.193041 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" containerName="machine-config-daemon" containerID="cri-o://5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" gracePeriod=600 Feb 01 08:10:25 crc kubenswrapper[4835]: E0201 08:10:25.322447 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.567064 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.567537 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.567579 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:10:25 crc kubenswrapper[4835]: I0201 08:10:25.567683 4835 scope.go:117] 
"RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:10:25 crc kubenswrapper[4835]: E0201 08:10:25.568154 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:10:26 crc kubenswrapper[4835]: I0201 08:10:26.006196 4835 generic.go:334] "Generic (PLEG): container finished" podID="303c450e-4b2d-4908-84e6-df8b444ed640" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" exitCode=0 Feb 01 08:10:26 crc kubenswrapper[4835]: I0201 08:10:26.006244 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" event={"ID":"303c450e-4b2d-4908-84e6-df8b444ed640","Type":"ContainerDied","Data":"5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df"} Feb 01 08:10:26 crc kubenswrapper[4835]: I0201 08:10:26.006275 4835 scope.go:117] "RemoveContainer" containerID="946bdd545dcf0243e8d2cbdd7bcdfb0181a2c4c626eff76dbf1ecf3e068ec549" Feb 01 08:10:26 crc kubenswrapper[4835]: I0201 08:10:26.007328 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:10:26 crc kubenswrapper[4835]: E0201 08:10:26.007842 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.567992 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.568431 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.568464 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.568532 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.568543 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:33 crc 
kubenswrapper[4835]: I0201 08:10:33.568589 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:33 crc kubenswrapper[4835]: E0201 08:10:33.569015 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:33 crc kubenswrapper[4835]: I0201 08:10:33.947072 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6qvjg_86105024-7ff9-4d38-9333-c7c7b241a5c5/kube-rbac-proxy/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.009201 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6qvjg_86105024-7ff9-4d38-9333-c7c7b241a5c5/controller/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.125320 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-frr-files/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.285461 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-frr-files/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.300872 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-reloader/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.322297 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-reloader/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.325876 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-metrics/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.595400 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-metrics/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.596258 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-reloader/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.596867 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-frr-files/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.637358 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-metrics/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.788062 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-metrics/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.816385 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-frr-files/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.865324 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/controller/0.log" Feb 01 08:10:34 crc kubenswrapper[4835]: I0201 08:10:34.879108 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/cp-reloader/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.053331 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/frr-metrics/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.077584 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/kube-rbac-proxy/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.123094 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/kube-rbac-proxy-frr/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.288928 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/reloader/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.304481 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9qwwp_5c427241-76d6-4772-9a78-74952bdbf29f/frr/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.372139 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7ldwd_e60f3db5-acc8-404c-a98c-6e6bfb05d6e9/frr-k8s-webhook-server/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.502592 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-56dbb5cfb5-ls84h_91863ede-5184-40d2-8fba-1f65d6fdc785/manager/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.552361 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58b8447d8-56lmr_c2ca8e92-ef3f-442a-830f-0e3c37d76087/webhook-server/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567038 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567062 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 
08:10:35 crc kubenswrapper[4835]: E0201 08:10:35.567271 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567594 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567659 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.567747 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:35 crc kubenswrapper[4835]: E0201 08:10:35.568012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.716625 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8s85p_0975cec6-f6ff-4188-9435-864a46ad1740/kube-rbac-proxy/0.log" Feb 01 08:10:35 crc kubenswrapper[4835]: I0201 08:10:35.775798 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-8s85p_0975cec6-f6ff-4188-9435-864a46ad1740/speaker/0.log" Feb 01 08:10:38 crc kubenswrapper[4835]: I0201 08:10:38.566106 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:10:38 crc kubenswrapper[4835]: I0201 08:10:38.567665 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:10:38 crc kubenswrapper[4835]: E0201 08:10:38.568099 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:10:39 crc 
kubenswrapper[4835]: I0201 08:10:39.567688 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:10:39 crc kubenswrapper[4835]: I0201 08:10:39.568172 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:10:39 crc kubenswrapper[4835]: I0201 08:10:39.568230 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:10:39 crc kubenswrapper[4835]: I0201 08:10:39.568387 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:10:39 crc kubenswrapper[4835]: E0201 08:10:39.569089 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:10:41 crc kubenswrapper[4835]: I0201 08:10:41.567082 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:10:41 crc kubenswrapper[4835]: E0201 08:10:41.567721 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.567490 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568127 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568311 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:48 crc kubenswrapper[4835]: E0201 08:10:48.568392 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568466 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568545 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568642 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568658 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:48 crc kubenswrapper[4835]: I0201 08:10:48.568722 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:48 crc kubenswrapper[4835]: E0201 08:10:48.849825 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.194754 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"} Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195360 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195434 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195512 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195520 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:10:49 crc kubenswrapper[4835]: I0201 08:10:49.195550 4835 scope.go:117] "RemoveContainer" 
containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:10:49 crc kubenswrapper[4835]: E0201 08:10:49.195825 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.567242 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.567310 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.567424 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:10:50 crc kubenswrapper[4835]: E0201 08:10:50.567706 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.650668 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-api-6966d58856-gg77m_6a69ee37-d1ea-4c2f-880a-1edb52d4352c/barbican-api-log/0.log" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.660029 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-api-6966d58856-gg77m_6a69ee37-d1ea-4c2f-880a-1edb52d4352c/barbican-api/0.log" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.794717 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-db-sync-ll8z7_b13e8606-6ec5-4e1b-a3fd-30f8eac5809a/barbican-db-sync/0.log" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.844862 4835 
log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-keystone-listener-77cb446946-46jb6_8653dceb-2d4e-419e-aa35-37bdca49dc2c/barbican-keystone-listener/0.log" Feb 01 08:10:50 crc kubenswrapper[4835]: I0201 08:10:50.991461 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-worker-794b798997-b6znz_c8bf5a1c-707a-4858-a716-7bc593ef0fc3/barbican-worker/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.008959 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-keystone-listener-77cb446946-46jb6_8653dceb-2d4e-419e-aa35-37bdca49dc2c/barbican-keystone-listener-log/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.053581 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_barbican-worker-794b798997-b6znz_c8bf5a1c-707a-4858-a716-7bc593ef0fc3/barbican-worker-log/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.291401 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_keystone-cron-29498881-kfzg5_f0c36c8d-897d-4b88-a236-44fe0d511c4e/keystone-cron/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.423568 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_keystone-95fb65664-fmplj_99f218fc-86ce-4952-a7cd-4c80a7cfe774/keystone-api/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.566766 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.567026 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.567047 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.567103 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:10:51 crc kubenswrapper[4835]: E0201 08:10:51.567364 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.597235 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_d1414aa9-85a0-4ed8-b897-0afc315eacf6/mysql-bootstrap/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.737657 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_d1414aa9-85a0-4ed8-b897-0afc315eacf6/galera/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.742108 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-0_d1414aa9-85a0-4ed8-b897-0afc315eacf6/mysql-bootstrap/0.log" Feb 01 08:10:51 crc kubenswrapper[4835]: I0201 08:10:51.991417 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_b44d32e5-044c-42e2-a6c8-eb93e48219f2/mysql-bootstrap/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.170159 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_b44d32e5-044c-42e2-a6c8-eb93e48219f2/mysql-bootstrap/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.177151 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_memcached-0_37529abc-a5d7-416b-8ea4-c6f0542ab3a8/memcached/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.195327 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-1_b44d32e5-044c-42e2-a6c8-eb93e48219f2/galera/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.344811 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_f271d73a-6ed8-4c97-b087-c6b3287c11e4/mysql-bootstrap/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.511732 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_f271d73a-6ed8-4c97-b087-c6b3287c11e4/mysql-bootstrap/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.519254 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_openstack-galera-2_f271d73a-6ed8-4c97-b087-c6b3287c11e4/galera/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.547827 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e/setup-container/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.566666 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:10:52 crc kubenswrapper[4835]: E0201 08:10:52.566918 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.567263 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.567280 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:10:52 crc kubenswrapper[4835]: E0201 08:10:52.567465 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.702732 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e/setup-container/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.739744 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_rabbitmq-server-0_34e38bb1-d3dc-46d8-8b2d-8cc583a0a70e/rabbitmq/0.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.776637 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-6c7f677bc9-lq29p_0449d2d9-ddcc-4eaa-84b1-9095448105f5/proxy-httpd/11.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.883259 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-6c7f677bc9-lq29p_0449d2d9-ddcc-4eaa-84b1-9095448105f5/proxy-httpd/11.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.901767 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-6c7f677bc9-lq29p_0449d2d9-ddcc-4eaa-84b1-9095448105f5/proxy-server/9.log" Feb 01 08:10:52 crc kubenswrapper[4835]: I0201 08:10:52.945734 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-6c7f677bc9-lq29p_0449d2d9-ddcc-4eaa-84b1-9095448105f5/proxy-server/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.084350 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-7d8cf99555-6vq9r_8ccb8908-ffc6-4032-8907-da7491bf9304/proxy-httpd/15.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.086280 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-7d8cf99555-6vq9r_8ccb8908-ffc6-4032-8907-da7491bf9304/proxy-httpd/15.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.102511 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-7d8cf99555-6vq9r_8ccb8908-ffc6-4032-8907-da7491bf9304/proxy-server/11.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.106995 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-proxy-7d8cf99555-6vq9r_8ccb8908-ffc6-4032-8907-da7491bf9304/proxy-server/11.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.267377 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-auditor/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.455693 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-reaper/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.457583 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-replicator/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.466274 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-replicator/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.507976 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/account-server/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.615795 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-auditor/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.632948 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-replicator/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.655742 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-replicator/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.703960 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-server/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.777026 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-sharder/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.794500 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-sharder/9.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.833501 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-updater/7.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.896715 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/container-updater/6.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.954288 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-auditor/0.log" Feb 01 08:10:53 crc kubenswrapper[4835]: I0201 08:10:53.981275 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-expirer/9.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.008692 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-expirer/9.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.121610 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-server/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.126783 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-replicator/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.185756 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-updater/6.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.237895 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/object-updater/6.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.272990 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/rsync/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.281575 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-0_f2e2f8e4-eb90-4d97-8796-8f5d196577ce/swift-recon-cron/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.347019 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-auditor/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.406690 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-reaper/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.437763 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-replicator/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.453208 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-replicator/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.480672 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/account-server/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.526512 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-auditor/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.607684 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-replicator/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.610636 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-server/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.616060 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-replicator/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.647453 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-updater/4.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.705960 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/container-updater/4.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.786083 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-auditor/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.789036 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-expirer/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.806174 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-expirer/7.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.842995 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-replicator/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.864337 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-server/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.940134 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-updater/3.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.969063 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/rsync/0.log" Feb 01 08:10:54 crc kubenswrapper[4835]: I0201 08:10:54.987914 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/object-updater/2.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.123776 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-1_559d52a7-a172-4c3c-aa13-ba07036485e1/swift-recon-cron/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.210769 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-auditor/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.252807 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-replicator/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.283369 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-reaper/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.285011 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-replicator/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.309890 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/account-server/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.399831 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-auditor/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.409942 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-replicator/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.439724 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-replicator/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.498945 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-server/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.499188 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-updater/4.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.553438 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/container-updater/3.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.598157 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-auditor/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.620478 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-expirer/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.679108 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-replicator/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.683156 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-expirer/7.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.704476 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-server/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.782664 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-updater/5.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.814499 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/object-updater/4.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.855278 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/rsync/0.log" Feb 01 08:10:55 crc kubenswrapper[4835]: I0201 08:10:55.865224 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-storage-2_69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef/swift-recon-cron/0.log" Feb 01 08:10:59 crc kubenswrapper[4835]: I0201 08:10:59.569753 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:10:59 crc kubenswrapper[4835]: I0201 08:10:59.570302 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:10:59 crc kubenswrapper[4835]: E0201 08:10:59.570562 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568146 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568288 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568494 4835 
scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568512 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:11:00 crc kubenswrapper[4835]: I0201 08:11:00.568579 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:11:00 crc kubenswrapper[4835]: E0201 08:11:00.569207 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:11:04 crc kubenswrapper[4835]: I0201 08:11:04.567515 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:11:04 crc kubenswrapper[4835]: I0201 08:11:04.568176 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:11:04 crc kubenswrapper[4835]: I0201 08:11:04.568255 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:11:04 crc kubenswrapper[4835]: E0201 08:11:04.568267 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:11:04 crc kubenswrapper[4835]: I0201 08:11:04.568344 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:11:04 crc kubenswrapper[4835]: E0201 08:11:04.568639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", 
failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.567863 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.567932 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.567951 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.568008 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:11:05 crc kubenswrapper[4835]: E0201 08:11:05.568277 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:11:05 crc kubenswrapper[4835]: I0201 08:11:05.597763 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:11:05 crc kubenswrapper[4835]: E0201 08:11:05.597991 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:11:05 crc kubenswrapper[4835]: E0201 08:11:05.598113 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:13:07.598082888 +0000 UTC m=+3060.718519362 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 08:11:06 crc kubenswrapper[4835]: I0201 08:11:06.566984 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:11:06 crc kubenswrapper[4835]: I0201 08:11:06.567024 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:11:06 crc kubenswrapper[4835]: E0201 08:11:06.567297 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.143247 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/util/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.316039 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/util/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.340309 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/pull/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.373749 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/pull/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.559854 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/pull/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.561084 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/util/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.561521 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckv28g_042bee18-1826-42db-a17a-6f0e3d488c16/extract/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.713759 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-utilities/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.902976 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-utilities/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.917660 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-content/0.log" Feb 01 08:11:09 crc kubenswrapper[4835]: I0201 08:11:09.917828 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-content/0.log" Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.095033 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-content/0.log" Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.169125 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/extract-utilities/0.log" Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.339587 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-utilities/0.log" Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.564694 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-content/0.log" Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.564791 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-utilities/0.log" Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.598163 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-content/0.log" Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.637278 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wqgsq_5cb5bbc9-0e87-45ed-897f-6e343be075d5/registry-server/0.log" Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.783196 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-utilities/0.log" Feb 01 08:11:10 crc kubenswrapper[4835]: I0201 08:11:10.803526 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/extract-content/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.097980 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w65gv_7f1e8788-786f-4f9d-b492-3a036764b28d/registry-server/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.117478 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-www9n_c2481990-b703-4792-b5b0-549daf22e66a/marketplace-operator/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.134644 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-utilities/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.305839 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-utilities/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.326858 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-content/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.332567 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-content/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.506169 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-utilities/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.511206 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/extract-content/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.566736 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.567108 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.567205 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.567217 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.567260 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:11:11 crc kubenswrapper[4835]: E0201 08:11:11.567649 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.608837 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-ghmxq_0155c2ce-1bd0-424d-931f-132c22e7a42e/registry-server/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 
08:11:11.724932 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-utilities/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.890447 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-content/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.912267 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-utilities/0.log" Feb 01 08:11:11 crc kubenswrapper[4835]: I0201 08:11:11.932063 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-content/0.log" Feb 01 08:11:12 crc kubenswrapper[4835]: I0201 08:11:12.074622 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-utilities/0.log" Feb 01 08:11:12 crc kubenswrapper[4835]: I0201 08:11:12.128376 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/extract-content/0.log" Feb 01 08:11:12 crc kubenswrapper[4835]: I0201 08:11:12.308591 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-75mhs_5fead728-7b7f-4ee9-b01e-455d536a88c5/registry-server/0.log" Feb 01 08:11:14 crc kubenswrapper[4835]: I0201 08:11:14.567009 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:11:14 crc kubenswrapper[4835]: I0201 08:11:14.567495 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:11:14 crc kubenswrapper[4835]: E0201 08:11:14.567975 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568464 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568604 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568644 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568811 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:11:16 crc kubenswrapper[4835]: I0201 08:11:16.568996 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:11:16 crc kubenswrapper[4835]: E0201 
08:11:16.569018 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:11:16 crc kubenswrapper[4835]: E0201 08:11:16.569458 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:11:18 crc kubenswrapper[4835]: I0201 08:11:18.567379 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:11:18 crc kubenswrapper[4835]: I0201 08:11:18.567776 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:11:18 crc kubenswrapper[4835]: I0201 08:11:18.567863 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:11:18 crc kubenswrapper[4835]: E0201 08:11:18.568177 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:11:21 crc kubenswrapper[4835]: I0201 08:11:21.567533 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:11:21 crc kubenswrapper[4835]: I0201 08:11:21.567891 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:11:21 crc kubenswrapper[4835]: E0201 08:11:21.568268 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:11:21 crc kubenswrapper[4835]: E0201 08:11:21.706081 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:11:22 crc kubenswrapper[4835]: I0201 08:11:22.447752 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567572 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567871 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567944 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567951 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:11:26 crc kubenswrapper[4835]: I0201 08:11:26.567982 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:11:26 crc kubenswrapper[4835]: E0201 08:11:26.568283 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:11:27 crc kubenswrapper[4835]: I0201 08:11:27.580848 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:11:27 crc kubenswrapper[4835]: I0201 08:11:27.580890 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:11:27 crc 
kubenswrapper[4835]: I0201 08:11:27.580937 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:11:27 crc kubenswrapper[4835]: E0201 08:11:27.581449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:11:27 crc kubenswrapper[4835]: E0201 08:11:27.581533 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:29 crc kubenswrapper[4835]: I0201 08:11:29.567898 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:11:29 crc kubenswrapper[4835]: I0201 08:11:29.568023 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:11:29 crc kubenswrapper[4835]: I0201 08:11:29.568186 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:11:29 crc kubenswrapper[4835]: E0201 08:11:29.568834 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:11:31 crc kubenswrapper[4835]: I0201 08:11:31.568186 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:11:31 crc kubenswrapper[4835]: I0201 08:11:31.568635 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:11:31 crc kubenswrapper[4835]: I0201 08:11:31.568682 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:11:31 crc kubenswrapper[4835]: I0201 08:11:31.568819 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:11:31 crc kubenswrapper[4835]: E0201 08:11:31.810705 4835 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:11:32 crc kubenswrapper[4835]: I0201 08:11:32.537734 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"} Feb 01 08:11:32 crc kubenswrapper[4835]: I0201 08:11:32.538575 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:11:32 crc kubenswrapper[4835]: I0201 08:11:32.538652 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:11:32 crc kubenswrapper[4835]: I0201 08:11:32.538767 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:11:32 crc kubenswrapper[4835]: E0201 08:11:32.539241 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.567167 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.567780 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:11:36 crc kubenswrapper[4835]: E0201 08:11:36.568122 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 
08:11:36.582440 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" exitCode=1 Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.582497 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"} Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.582564 4835 scope.go:117] "RemoveContainer" containerID="4b0df01d34d12ae17d155ae36b92b2f522572459ddefaa32e896e7c20c113098" Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583689 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583800 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583853 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583965 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.583985 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:11:36 crc kubenswrapper[4835]: I0201 08:11:36.584052 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:11:36 crc kubenswrapper[4835]: E0201 08:11:36.584736 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:11:39 crc kubenswrapper[4835]: I0201 08:11:39.567309 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:11:39 crc kubenswrapper[4835]: E0201 08:11:39.567830 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:11:40 crc kubenswrapper[4835]: I0201 08:11:40.567758 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:11:40 crc kubenswrapper[4835]: I0201 08:11:40.568166 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:11:40 crc kubenswrapper[4835]: E0201 08:11:40.568639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:43 crc kubenswrapper[4835]: I0201 08:11:43.567641 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:11:43 crc kubenswrapper[4835]: I0201 08:11:43.568008 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:11:43 crc kubenswrapper[4835]: I0201 08:11:43.568096 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:11:43 crc kubenswrapper[4835]: E0201 08:11:43.572370 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:11:44 crc kubenswrapper[4835]: I0201 08:11:44.568140 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:11:44 crc kubenswrapper[4835]: I0201 08:11:44.568336 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:11:44 crc kubenswrapper[4835]: I0201 08:11:44.568599 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:11:44 crc kubenswrapper[4835]: E0201 08:11:44.569626 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.566892 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.567227 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569171 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569488 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569586 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569684 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569703 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:11:50 crc kubenswrapper[4835]: I0201 08:11:50.569781 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:11:50 crc kubenswrapper[4835]: E0201 08:11:50.571885 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:11:50 crc kubenswrapper[4835]: E0201 08:11:50.572072 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:11:51 crc kubenswrapper[4835]: I0201 08:11:51.575192 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:11:51 crc kubenswrapper[4835]: E0201 08:11:51.576866 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:11:51 crc kubenswrapper[4835]: I0201 08:11:51.579097 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:11:51 crc kubenswrapper[4835]: I0201 08:11:51.579283 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:11:51 crc kubenswrapper[4835]: E0201 08:11:51.821512 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:52 crc kubenswrapper[4835]: I0201 08:11:52.737030 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e"} Feb 01 08:11:52 crc kubenswrapper[4835]: I0201 08:11:52.737733 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:11:52 crc kubenswrapper[4835]: I0201 08:11:52.738314 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:11:52 crc kubenswrapper[4835]: E0201 08:11:52.738628 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:53 crc kubenswrapper[4835]: I0201 08:11:53.753216 4835 scope.go:117] "RemoveContainer" 
containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:11:53 crc kubenswrapper[4835]: E0201 08:11:53.753695 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.567553 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.568122 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.568305 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:11:56 crc kubenswrapper[4835]: E0201 08:11:56.568933 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.571439 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.571561 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:11:56 crc kubenswrapper[4835]: I0201 08:11:56.571747 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:11:56 crc kubenswrapper[4835]: E0201 08:11:56.572166 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:11:58 crc kubenswrapper[4835]: I0201 08:11:58.022281 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" 
podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:00 crc kubenswrapper[4835]: I0201 08:12:00.021554 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:01 crc kubenswrapper[4835]: I0201 08:12:01.023658 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.021015 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.021518 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.022008 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.022027 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.022055 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" containerID="cri-o://b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e" gracePeriod=30 Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.023137 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.321880 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.567584 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.567994 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568039 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:12:04 crc 
kubenswrapper[4835]: I0201 08:12:04.568078 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568314 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568486 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568510 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.568502 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.568586 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.785220 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869233 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e" exitCode=0 Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869358 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e"} Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869393 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerStarted","Data":"88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"} Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869438 4835 scope.go:117] "RemoveContainer" containerID="ee781ed8abd6d4677950e8833014c029aac0581f7778b2b0cf90cbe45aa47140" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.870222 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.870481 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.869976 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.890361 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerStarted","Data":"0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30"} Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891204 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891287 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891317 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891395 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:12:04 crc kubenswrapper[4835]: I0201 08:12:04.891461 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:12:04 crc kubenswrapper[4835]: E0201 08:12:04.891794 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:12:05 crc kubenswrapper[4835]: I0201 08:12:05.909961 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:05 crc kubenswrapper[4835]: E0201 08:12:05.910237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:06 crc kubenswrapper[4835]: I0201 08:12:06.568085 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:12:06 crc kubenswrapper[4835]: E0201 08:12:06.568694 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567191 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567594 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567645 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567710 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567719 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:12:08 crc kubenswrapper[4835]: I0201 08:12:08.567794 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:12:08 crc kubenswrapper[4835]: E0201 08:12:08.568025 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:12:08 crc kubenswrapper[4835]: E0201 08:12:08.568120 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:12:10 crc kubenswrapper[4835]: I0201 08:12:10.020901 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:10 crc kubenswrapper[4835]: I0201 08:12:10.021204 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.020453 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.585315 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ht7np"] Feb 01 08:12:13 crc kubenswrapper[4835]: E0201 08:12:13.587603 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="extract-content" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.587645 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="extract-content" Feb 01 08:12:13 crc kubenswrapper[4835]: E0201 08:12:13.587684 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="extract-utilities" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.587701 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="extract-utilities" Feb 01 08:12:13 crc kubenswrapper[4835]: E0201 08:12:13.587724 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="registry-server" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.587734 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="registry-server" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.588115 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="655042b7-c713-4116-b191-f8e9c03ac3b0" containerName="registry-server" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.590197 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.608155 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ht7np"] Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.689697 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.689750 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.690050 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.791975 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.792067 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.792097 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.792582 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.792607 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.823162 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") pod \"community-operators-ht7np\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:13 crc kubenswrapper[4835]: I0201 08:12:13.907843 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:14 crc kubenswrapper[4835]: I0201 08:12:14.379740 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ht7np"] Feb 01 08:12:14 crc kubenswrapper[4835]: I0201 08:12:14.993036 4835 generic.go:334] "Generic (PLEG): container finished" podID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerID="257999e3db2b74db22341c8d2cd296a015048fbe6925996c672901787785ecff" exitCode=0 Feb 01 08:12:14 crc kubenswrapper[4835]: I0201 08:12:14.993109 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerDied","Data":"257999e3db2b74db22341c8d2cd296a015048fbe6925996c672901787785ecff"} Feb 01 08:12:14 crc kubenswrapper[4835]: I0201 08:12:14.993155 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerStarted","Data":"1adfdeac740f8473883190bbbf16ff1b597929b666ea3516bfd2b4a2d6d415b6"} Feb 01 08:12:15 crc kubenswrapper[4835]: I0201 08:12:15.022695 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.004985 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerStarted","Data":"615680d44233ef1c6513a02550240e25fb6a1832f88f299d98d631aaa5490d5a"} Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.022489 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.022571 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.023382 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"} pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.023422 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.023453 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" 
containerName="proxy-httpd" containerID="cri-o://88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" gracePeriod=30 Feb 01 08:12:16 crc kubenswrapper[4835]: I0201 08:12:16.025962 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:12:16 crc kubenswrapper[4835]: E0201 08:12:16.186874 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.016641 4835 generic.go:334] "Generic (PLEG): container finished" podID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" exitCode=0 Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.016924 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" event={"ID":"0449d2d9-ddcc-4eaa-84b1-9095448105f5","Type":"ContainerDied","Data":"88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395"} Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.018173 4835 scope.go:117] "RemoveContainer" containerID="b0762087810217898e4c9db3485210e50096a89f21ff2bb70ea52611f0c43b3e" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.019494 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.019701 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.020236 4835 generic.go:334] "Generic (PLEG): container finished" podID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerID="615680d44233ef1c6513a02550240e25fb6a1832f88f299d98d631aaa5490d5a" exitCode=0 Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.020282 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerDied","Data":"615680d44233ef1c6513a02550240e25fb6a1832f88f299d98d631aaa5490d5a"} Feb 01 08:12:17 crc kubenswrapper[4835]: E0201 08:12:17.020802 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.575027 4835 scope.go:117] 
"RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:12:17 crc kubenswrapper[4835]: E0201 08:12:17.575341 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.575496 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.575518 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:12:17 crc kubenswrapper[4835]: E0201 08:12:17.575744 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.591985 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"] Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.593780 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.601174 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"] Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.659039 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.659142 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.659207 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761246 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761329 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761639 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761786 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.761875 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.787351 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") pod \"redhat-marketplace-g6wxb\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:17 crc kubenswrapper[4835]: I0201 08:12:17.956534 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.032397 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerStarted","Data":"9112aca4ab0f886aa1e24b4cfd392caaf93cbfb45f02e398024e566fc9d33796"} Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.053057 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ht7np" podStartSLOduration=2.535189397 podStartE2EDuration="5.053037624s" podCreationTimestamp="2026-02-01 08:12:13 +0000 UTC" firstStartedPulling="2026-02-01 08:12:14.995329514 +0000 UTC m=+3008.115765988" lastFinishedPulling="2026-02-01 08:12:17.513177771 +0000 UTC m=+3010.633614215" observedRunningTime="2026-02-01 08:12:18.051718989 +0000 UTC m=+3011.172155443" watchObservedRunningTime="2026-02-01 08:12:18.053037624 +0000 UTC m=+3011.173474068" Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.451189 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"] Feb 01 08:12:18 crc kubenswrapper[4835]: W0201 08:12:18.473726 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc90c5237_f023_4eab_b902_e86f65ad245e.slice/crio-64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729 WatchSource:0}: Error finding container 64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729: Status 404 returned error can't find the container with id 64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729 Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567111 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567208 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567235 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567320 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:12:18 crc kubenswrapper[4835]: I0201 08:12:18.567361 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:12:18 crc kubenswrapper[4835]: E0201 08:12:18.570652 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:12:19 crc kubenswrapper[4835]: I0201 08:12:19.043540 4835 generic.go:334] "Generic (PLEG): container finished" podID="c90c5237-f023-4eab-b902-e86f65ad245e" containerID="93f93e04d07dd9952f9910d6a6142e0a0e711c59737c2a3528a5b1405391d8eb" exitCode=0 Feb 01 08:12:19 crc kubenswrapper[4835]: I0201 08:12:19.043622 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerDied","Data":"93f93e04d07dd9952f9910d6a6142e0a0e711c59737c2a3528a5b1405391d8eb"} Feb 01 08:12:19 crc kubenswrapper[4835]: I0201 08:12:19.043693 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerStarted","Data":"64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729"} Feb 01 08:12:20 crc kubenswrapper[4835]: I0201 08:12:20.055484 4835 generic.go:334] "Generic (PLEG): container finished" podID="c90c5237-f023-4eab-b902-e86f65ad245e" containerID="71d1bcb8cf09316b240e912a0cd55b9e033653be06919c5b8fd25c715b25b972" exitCode=0 Feb 01 08:12:20 crc kubenswrapper[4835]: I0201 08:12:20.055836 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerDied","Data":"71d1bcb8cf09316b240e912a0cd55b9e033653be06919c5b8fd25c715b25b972"} Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.076937 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerStarted","Data":"a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930"} Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.121114 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g6wxb" podStartSLOduration=2.648621941 podStartE2EDuration="4.121088874s" podCreationTimestamp="2026-02-01 08:12:17 +0000 UTC" firstStartedPulling="2026-02-01 08:12:19.045893486 +0000 UTC m=+3012.166329920" lastFinishedPulling="2026-02-01 08:12:20.518360409 +0000 UTC m=+3013.638796853" observedRunningTime="2026-02-01 08:12:21.109612872 +0000 UTC m=+3014.230049336" watchObservedRunningTime="2026-02-01 08:12:21.121088874 +0000 UTC m=+3014.241525338" Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.567243 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.567352 4835 scope.go:117] "RemoveContainer" 
containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.567543 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:12:21 crc kubenswrapper[4835]: E0201 08:12:21.567913 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.568198 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.568402 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:12:21 crc kubenswrapper[4835]: I0201 08:12:21.568683 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:12:21 crc kubenswrapper[4835]: E0201 08:12:21.569155 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:12:23 crc kubenswrapper[4835]: I0201 08:12:23.908401 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:23 crc kubenswrapper[4835]: I0201 08:12:23.908702 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:23 crc kubenswrapper[4835]: I0201 08:12:23.964241 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:24 crc kubenswrapper[4835]: I0201 08:12:24.175753 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.146755 4835 generic.go:334] "Generic (PLEG): container finished" podID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6" exitCode=0 
Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.146859 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" event={"ID":"dfdcbe67-d5e0-4882-b2d9-e039513a25f0","Type":"ContainerDied","Data":"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"} Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.148817 4835 scope.go:117] "RemoveContainer" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6" Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.534003 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwdqc_must-gather-c7xxg_dfdcbe67-d5e0-4882-b2d9-e039513a25f0/gather/0.log" Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.761668 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ht7np"] Feb 01 08:12:26 crc kubenswrapper[4835]: I0201 08:12:26.761975 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ht7np" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerName="registry-server" containerID="cri-o://9112aca4ab0f886aa1e24b4cfd392caaf93cbfb45f02e398024e566fc9d33796" gracePeriod=2 Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.159751 4835 generic.go:334] "Generic (PLEG): container finished" podID="83bc0253-a027-4b59-ae32-e1c1279057c8" containerID="9112aca4ab0f886aa1e24b4cfd392caaf93cbfb45f02e398024e566fc9d33796" exitCode=0 Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.159792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerDied","Data":"9112aca4ab0f886aa1e24b4cfd392caaf93cbfb45f02e398024e566fc9d33796"} Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.159817 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ht7np" event={"ID":"83bc0253-a027-4b59-ae32-e1c1279057c8","Type":"ContainerDied","Data":"1adfdeac740f8473883190bbbf16ff1b597929b666ea3516bfd2b4a2d6d415b6"} Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.159829 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1adfdeac740f8473883190bbbf16ff1b597929b666ea3516bfd2b4a2d6d415b6" Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.194771 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.256061 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") pod \"83bc0253-a027-4b59-ae32-e1c1279057c8\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.256137 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") pod \"83bc0253-a027-4b59-ae32-e1c1279057c8\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.256296 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") pod \"83bc0253-a027-4b59-ae32-e1c1279057c8\" (UID: \"83bc0253-a027-4b59-ae32-e1c1279057c8\") " Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.257154 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities" (OuterVolumeSpecName: "utilities") pod "83bc0253-a027-4b59-ae32-e1c1279057c8" (UID: "83bc0253-a027-4b59-ae32-e1c1279057c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.261891 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb" (OuterVolumeSpecName: "kube-api-access-8dhhb") pod "83bc0253-a027-4b59-ae32-e1c1279057c8" (UID: "83bc0253-a027-4b59-ae32-e1c1279057c8"). InnerVolumeSpecName "kube-api-access-8dhhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.314745 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83bc0253-a027-4b59-ae32-e1c1279057c8" (UID: "83bc0253-a027-4b59-ae32-e1c1279057c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.358490 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dhhb\" (UniqueName: \"kubernetes.io/projected/83bc0253-a027-4b59-ae32-e1c1279057c8-kube-api-access-8dhhb\") on node \"crc\" DevicePath \"\"" Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.358531 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.358546 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83bc0253-a027-4b59-ae32-e1c1279057c8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.958468 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:27 crc kubenswrapper[4835]: I0201 08:12:27.958898 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.095517 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.170099 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ht7np" Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.197015 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ht7np"] Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.202549 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ht7np"] Feb 01 08:12:28 crc kubenswrapper[4835]: I0201 08:12:28.209337 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:29 crc kubenswrapper[4835]: I0201 08:12:29.568131 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:12:29 crc kubenswrapper[4835]: E0201 08:12:29.568375 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:12:29 crc kubenswrapper[4835]: I0201 08:12:29.578909 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83bc0253-a027-4b59-ae32-e1c1279057c8" path="/var/lib/kubelet/pods/83bc0253-a027-4b59-ae32-e1c1279057c8/volumes" Feb 01 08:12:30 crc kubenswrapper[4835]: I0201 08:12:30.567259 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:12:30 crc kubenswrapper[4835]: I0201 08:12:30.567614 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:30 crc kubenswrapper[4835]: E0201 08:12:30.567815 4835 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:31 crc kubenswrapper[4835]: I0201 08:12:31.567617 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:12:31 crc kubenswrapper[4835]: I0201 08:12:31.568462 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:12:31 crc kubenswrapper[4835]: E0201 08:12:31.569047 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:12:32 crc kubenswrapper[4835]: I0201 08:12:32.766682 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"] Feb 01 08:12:32 crc kubenswrapper[4835]: I0201 08:12:32.767351 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g6wxb" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" containerName="registry-server" containerID="cri-o://a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930" gracePeriod=2 Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.240351 4835 generic.go:334] "Generic (PLEG): container finished" podID="c90c5237-f023-4eab-b902-e86f65ad245e" containerID="a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930" exitCode=0 Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.240840 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerDied","Data":"a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930"} Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.308185 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.370185 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") pod \"c90c5237-f023-4eab-b902-e86f65ad245e\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.370275 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") pod \"c90c5237-f023-4eab-b902-e86f65ad245e\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.370400 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") pod \"c90c5237-f023-4eab-b902-e86f65ad245e\" (UID: \"c90c5237-f023-4eab-b902-e86f65ad245e\") " Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.371396 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities" (OuterVolumeSpecName: "utilities") pod "c90c5237-f023-4eab-b902-e86f65ad245e" (UID: "c90c5237-f023-4eab-b902-e86f65ad245e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.377016 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8" (OuterVolumeSpecName: "kube-api-access-6pvh8") pod "c90c5237-f023-4eab-b902-e86f65ad245e" (UID: "c90c5237-f023-4eab-b902-e86f65ad245e"). InnerVolumeSpecName "kube-api-access-6pvh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.395943 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c90c5237-f023-4eab-b902-e86f65ad245e" (UID: "c90c5237-f023-4eab-b902-e86f65ad245e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.472781 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-utilities\") on node \"crc\" DevicePath \"\"" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.472825 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c90c5237-f023-4eab-b902-e86f65ad245e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.472842 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pvh8\" (UniqueName: \"kubernetes.io/projected/c90c5237-f023-4eab-b902-e86f65ad245e-kube-api-access-6pvh8\") on node \"crc\" DevicePath \"\"" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569592 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569685 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569714 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569830 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.569876 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:12:33 crc kubenswrapper[4835]: E0201 08:12:33.570240 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.862380 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"] Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.863184 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerName="copy" 
containerID="cri-o://275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7" gracePeriod=2 Feb 01 08:12:33 crc kubenswrapper[4835]: I0201 08:12:33.868614 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vwdqc/must-gather-c7xxg"] Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.237053 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwdqc_must-gather-c7xxg_dfdcbe67-d5e0-4882-b2d9-e039513a25f0/copy/0.log" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.237985 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.249605 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vwdqc_must-gather-c7xxg_dfdcbe67-d5e0-4882-b2d9-e039513a25f0/copy/0.log" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.250039 4835 generic.go:334] "Generic (PLEG): container finished" podID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" containerID="275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7" exitCode=143 Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.250116 4835 scope.go:117] "RemoveContainer" containerID="275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.250251 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vwdqc/must-gather-c7xxg" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.254104 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g6wxb" event={"ID":"c90c5237-f023-4eab-b902-e86f65ad245e","Type":"ContainerDied","Data":"64a069c0b4701b893153030422257b773fb329cea9fbbba5ba7eb08fccd5f729"} Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.254198 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g6wxb" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.287071 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"] Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.287747 4835 scope.go:117] "RemoveContainer" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.292569 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g6wxb"] Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.292766 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2blml\" (UniqueName: \"kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml\") pod \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.292989 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-must-gather-output\") pod \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\" (UID: \"dfdcbe67-d5e0-4882-b2d9-e039513a25f0\") " Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.300624 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfdcbe67-d5e0-4882-b2d9-e039513a25f0-kube-api-access-2blml" (OuterVolumeSpecName: "kube-api-access-2blml") pod "dfdcbe67-d5e0-4882-b2d9-e039513a25f0" (UID: "dfdcbe67-d5e0-4882-b2d9-e039513a25f0"). InnerVolumeSpecName "kube-api-access-2blml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.341259 4835 scope.go:117] "RemoveContainer" containerID="275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7" Feb 01 08:12:34 crc kubenswrapper[4835]: E0201 08:12:34.341923 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7\": container with ID starting with 275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7 not found: ID does not exist" containerID="275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.341970 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7"} err="failed to get container status \"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7\": rpc error: code = NotFound desc = could not find container \"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7\": container with ID starting with 275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7 not found: ID does not exist" Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.342000 4835 scope.go:117] "RemoveContainer" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6" Feb 01 08:12:34 crc kubenswrapper[4835]: E0201 08:12:34.342480 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6\": container with ID starting with 
Feb 01 08:12:34 crc kubenswrapper[4835]: E0201 08:12:34.341923 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7\": container with ID starting with 275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7 not found: ID does not exist" containerID="275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.341970 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7"} err="failed to get container status \"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7\": rpc error: code = NotFound desc = could not find container \"275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7\": container with ID starting with 275d139ef89b68c8944a866b1f7eaf25618c1648a86d84e9198e1e0ac33871b7 not found: ID does not exist"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.342000 4835 scope.go:117] "RemoveContainer" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"
Feb 01 08:12:34 crc kubenswrapper[4835]: E0201 08:12:34.342480 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6\": container with ID starting with ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6 not found: ID does not exist" containerID="ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.342546 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6"} err="failed to get container status \"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6\": rpc error: code = NotFound desc = could not find container \"ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6\": container with ID starting with ff70e5a46efa9a4fc239271d5d64d594dab2c4bc357cd62c2841710559b957e6 not found: ID does not exist"
Feb 01 08:12:34 crc kubenswrapper[4835]: I0201 08:12:34.342566 4835 scope.go:117] "RemoveContainer" containerID="a3e342f4cc3d9d80d0cf07fb396f5d94d0890fcb4992a1a13698a0ae50be4930"
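The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" NotFound messages above are a benign race rather than a cleanup failure: by the time the kubelet re-queries CRI-O about a container it intends to delete, the runtime has already removed it, so there is nothing left to do. Cleanup only has to be idempotent, which is why the error is logged and then ignored; a minimal sketch of that pattern (the helper names are hypothetical, not kubelet code):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the CRI "code = NotFound" condition in the log.
var errNotFound = errors.New("no such container")

// statusContainer is a hypothetical stand-in for a runtime
// ContainerStatus call against an already-removed container.
func statusContainer(id string) error { return errNotFound }

// removeContainer treats NotFound as success so that repeated cleanup
// passes over the same ID stay idempotent.
func removeContainer(id string) error {
	if err := statusContainer(id); errors.Is(err, errNotFound) {
		return nil // already gone: nothing left to delete
	} else if err != nil {
		return err
	}
	// ...issue the real delete here...
	return nil
}

func main() {
	fmt.Println(removeContainer("275d139ef89b6...")) // <nil>
}
```

Consistent with that reading, the "DeleteContainer returned error" lines above are emitted at info level and the sync loop moves straight on.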
\"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:12:35 crc kubenswrapper[4835]: E0201 08:12:35.568507 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.580359 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c90c5237-f023-4eab-b902-e86f65ad245e" path="/var/lib/kubelet/pods/c90c5237-f023-4eab-b902-e86f65ad245e/volumes" Feb 01 08:12:35 crc kubenswrapper[4835]: I0201 08:12:35.581613 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfdcbe67-d5e0-4882-b2d9-e039513a25f0" path="/var/lib/kubelet/pods/dfdcbe67-d5e0-4882-b2d9-e039513a25f0/volumes" Feb 01 08:12:40 crc kubenswrapper[4835]: I0201 08:12:40.566983 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:12:40 crc kubenswrapper[4835]: E0201 08:12:40.567785 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:12:44 crc kubenswrapper[4835]: I0201 08:12:44.567643 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:12:44 crc kubenswrapper[4835]: I0201 08:12:44.567976 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:44 crc kubenswrapper[4835]: E0201 08:12:44.568345 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:45 crc kubenswrapper[4835]: I0201 08:12:45.567362 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:12:45 crc kubenswrapper[4835]: I0201 08:12:45.567390 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:12:45 crc kubenswrapper[4835]: E0201 08:12:45.567581 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:12:47 crc kubenswrapper[4835]: I0201 08:12:47.575039 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:12:47 crc kubenswrapper[4835]: I0201 08:12:47.575542 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:12:47 crc kubenswrapper[4835]: I0201 08:12:47.575720 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:12:47 crc kubenswrapper[4835]: E0201 08:12:47.576170 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.566723 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.567095 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.567186 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.567295 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:12:48 crc kubenswrapper[4835]: I0201 08:12:48.567378 4835 scope.go:117] "RemoveContainer" 
containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:12:48 crc kubenswrapper[4835]: E0201 08:12:48.567785 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:12:50 crc kubenswrapper[4835]: I0201 08:12:50.569144 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:12:50 crc kubenswrapper[4835]: I0201 08:12:50.570314 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:12:50 crc kubenswrapper[4835]: I0201 08:12:50.570839 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:12:50 crc kubenswrapper[4835]: E0201 08:12:50.572026 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:12:52 crc kubenswrapper[4835]: I0201 08:12:52.566440 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:12:52 crc kubenswrapper[4835]: E0201 08:12:52.566811 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:12:55 crc kubenswrapper[4835]: I0201 08:12:55.567012 4835 scope.go:117] 
"RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:12:55 crc kubenswrapper[4835]: I0201 08:12:55.567311 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:12:55 crc kubenswrapper[4835]: E0201 08:12:55.567558 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:12:59 crc kubenswrapper[4835]: I0201 08:12:59.567337 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:12:59 crc kubenswrapper[4835]: I0201 08:12:59.567789 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:12:59 crc kubenswrapper[4835]: I0201 08:12:59.567907 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:12:59 crc kubenswrapper[4835]: E0201 08:12:59.568283 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.567046 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.567096 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.567960 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.568142 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.568192 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.568325 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:13:00 crc kubenswrapper[4835]: I0201 08:13:00.568395 4835 scope.go:117] "RemoveContainer" 
containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:13:00 crc kubenswrapper[4835]: E0201 08:13:00.569196 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:13:00 crc kubenswrapper[4835]: E0201 08:13:00.807284 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:01 crc kubenswrapper[4835]: I0201 08:13:01.541554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa"} Feb 01 08:13:01 crc kubenswrapper[4835]: I0201 08:13:01.542510 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:01 crc kubenswrapper[4835]: I0201 08:13:01.542665 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:13:01 crc kubenswrapper[4835]: E0201 08:13:01.542855 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:02 crc kubenswrapper[4835]: I0201 08:13:02.553394 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:02 crc kubenswrapper[4835]: E0201 08:13:02.553816 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" 
podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:04 crc kubenswrapper[4835]: I0201 08:13:04.567561 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:04 crc kubenswrapper[4835]: I0201 08:13:04.567770 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:04 crc kubenswrapper[4835]: I0201 08:13:04.568030 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:04 crc kubenswrapper[4835]: E0201 08:13:04.568632 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:06 crc kubenswrapper[4835]: I0201 08:13:06.540742 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:07 crc kubenswrapper[4835]: I0201 08:13:07.536875 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:07 crc kubenswrapper[4835]: I0201 08:13:07.574693 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:13:07 crc kubenswrapper[4835]: E0201 08:13:07.575047 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:13:07 crc kubenswrapper[4835]: I0201 08:13:07.602685 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices\") pod \"swift-ring-rebalance-w2wt7\" (UID: \"b45c05e1-195b-43c0-a44d-1d1c50886dfc\") " pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:13:07 crc kubenswrapper[4835]: E0201 08:13:07.602937 4835 configmap.go:193] Couldn't get configMap swift-kuttl-tests/swift-ring-config-data: configmap "swift-ring-config-data" not found Feb 01 08:13:07 crc kubenswrapper[4835]: E0201 08:13:07.603113 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices podName:b45c05e1-195b-43c0-a44d-1d1c50886dfc nodeName:}" failed. No retries permitted until 2026-02-01 08:15:09.603079645 +0000 UTC m=+3182.723516089 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "ring-data-devices" (UniqueName: "kubernetes.io/configmap/b45c05e1-195b-43c0-a44d-1d1c50886dfc-ring-data-devices") pod "swift-ring-rebalance-w2wt7" (UID: "b45c05e1-195b-43c0-a44d-1d1c50886dfc") : configmap "swift-ring-config-data" not found Feb 01 08:13:09 crc kubenswrapper[4835]: I0201 08:13:09.537630 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:10 crc kubenswrapper[4835]: I0201 08:13:10.567672 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:13:10 crc kubenswrapper[4835]: I0201 08:13:10.568126 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:13:10 crc kubenswrapper[4835]: E0201 08:13:10.568563 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.537444 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.537928 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.537980 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.538691 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.538724 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.538752 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" 
containerName="proxy-httpd" containerID="cri-o://6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa" gracePeriod=30 Feb 01 08:13:12 crc kubenswrapper[4835]: I0201 08:13:12.539881 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:12 crc kubenswrapper[4835]: E0201 08:13:12.833779 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.661848 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa" exitCode=0 Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.661903 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa"} Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.661933 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerStarted","Data":"ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"} Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.661953 4835 scope.go:117] "RemoveContainer" containerID="89b0b1edbf45201a1962b86ffd4019b493a8265f97c736e48cf20dcce90fa2a8" Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.662592 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:13 crc kubenswrapper[4835]: E0201 08:13:13.662951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:13 crc kubenswrapper[4835]: I0201 08:13:13.663135 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.567577 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.567927 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.567995 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568027 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.567577 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.567927 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.567995 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568027 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568119 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568167 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568226 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.568261 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3"
Feb 01 08:13:14 crc kubenswrapper[4835]: E0201 08:13:14.568643 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef"
Feb 01 08:13:14 crc kubenswrapper[4835]: E0201 08:13:14.568697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce"
Feb 01 08:13:14 crc kubenswrapper[4835]: I0201 08:13:14.672931 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92"
Feb 01 08:13:14 crc kubenswrapper[4835]: E0201 08:13:14.673152 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304"
Feb 01 08:13:15 crc kubenswrapper[4835]: I0201 08:13:15.568184 4835 scope.go:117] 
"RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:15 crc kubenswrapper[4835]: I0201 08:13:15.568271 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:15 crc kubenswrapper[4835]: I0201 08:13:15.568354 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:15 crc kubenswrapper[4835]: E0201 08:13:15.568692 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:17 crc kubenswrapper[4835]: I0201 08:13:17.538048 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:18 crc kubenswrapper[4835]: I0201 08:13:18.538040 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:18 crc kubenswrapper[4835]: I0201 08:13:18.567507 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:13:18 crc kubenswrapper[4835]: E0201 08:13:18.567800 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:13:21 crc kubenswrapper[4835]: I0201 08:13:21.539885 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:21 crc kubenswrapper[4835]: I0201 08:13:21.566729 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:13:21 crc kubenswrapper[4835]: I0201 08:13:21.566757 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:13:21 crc kubenswrapper[4835]: E0201 08:13:21.566985 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:13:22 crc kubenswrapper[4835]: I0201 08:13:22.537650 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.537731 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.537832 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.538786 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="proxy-httpd" containerStatusID={"Type":"cri-o","ID":"ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"} pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" containerMessage="Container proxy-httpd failed liveness probe, will be restarted" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.538820 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.538859 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" containerID="cri-o://ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" gracePeriod=30 Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.539314 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 01 08:13:24 crc kubenswrapper[4835]: E0201 08:13:24.673937 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.767308 4835 generic.go:334] "Generic (PLEG): container finished" podID="8ccb8908-ffc6-4032-8907-da7491bf9304" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" exitCode=0 Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.767344 4835 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" event={"ID":"8ccb8908-ffc6-4032-8907-da7491bf9304","Type":"ContainerDied","Data":"ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11"} Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.767781 4835 scope.go:117] "RemoveContainer" containerID="6a913b6ae50136af191cf5b4dbaef03f3230b919285acfb6297aab38c6ca55fa" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.768850 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:13:24 crc kubenswrapper[4835]: I0201 08:13:24.768923 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:24 crc kubenswrapper[4835]: E0201 08:13:24.769371 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:25 crc kubenswrapper[4835]: E0201 08:13:25.449466 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ring-data-devices], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" podUID="b45c05e1-195b-43c0-a44d-1d1c50886dfc" Feb 01 08:13:25 crc kubenswrapper[4835]: I0201 08:13:25.781013 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-w2wt7" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567605 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567716 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567761 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567890 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:13:26 crc kubenswrapper[4835]: I0201 08:13:26.567955 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:13:26 crc kubenswrapper[4835]: E0201 08:13:26.568394 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.567368 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568064 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568191 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568281 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568439 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568519 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:13:29 crc kubenswrapper[4835]: I0201 08:13:29.568616 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:13:29 crc 
kubenswrapper[4835]: E0201 08:13:29.568807 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:13:29 crc kubenswrapper[4835]: E0201 08:13:29.568921 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:29 crc kubenswrapper[4835]: E0201 08:13:29.568924 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:13:33 crc kubenswrapper[4835]: I0201 08:13:33.569468 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:13:33 crc kubenswrapper[4835]: I0201 08:13:33.571606 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:13:33 crc kubenswrapper[4835]: E0201 08:13:33.575016 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:13:37 crc kubenswrapper[4835]: I0201 08:13:37.571121 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:13:37 crc kubenswrapper[4835]: I0201 08:13:37.571450 4835 scope.go:117] "RemoveContainer" 
containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:37 crc kubenswrapper[4835]: E0201 08:13:37.571662 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567300 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567707 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567740 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567817 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:13:40 crc kubenswrapper[4835]: I0201 08:13:40.567860 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:13:40 crc kubenswrapper[4835]: E0201 08:13:40.568199 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:13:41 crc kubenswrapper[4835]: I0201 08:13:41.568042 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:41 crc kubenswrapper[4835]: I0201 08:13:41.569200 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:41 crc kubenswrapper[4835]: I0201 08:13:41.569400 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:41 crc kubenswrapper[4835]: E0201 08:13:41.569950 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:43 crc kubenswrapper[4835]: I0201 08:13:43.567503 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:13:43 crc kubenswrapper[4835]: I0201 08:13:43.567559 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:13:43 crc kubenswrapper[4835]: I0201 08:13:43.567643 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:13:43 crc kubenswrapper[4835]: E0201 08:13:43.567726 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:13:43 crc kubenswrapper[4835]: I0201 08:13:43.567772 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:13:43 crc kubenswrapper[4835]: E0201 08:13:43.568101 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:13:45 crc kubenswrapper[4835]: I0201 08:13:45.567791 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:13:45 crc kubenswrapper[4835]: I0201 08:13:45.568238 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:13:45 crc kubenswrapper[4835]: E0201 08:13:45.568699 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd 
pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:13:49 crc kubenswrapper[4835]: I0201 08:13:49.567214 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:13:49 crc kubenswrapper[4835]: I0201 08:13:49.567624 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:13:49 crc kubenswrapper[4835]: E0201 08:13:49.567928 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.568321 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.569721 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.569801 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.569931 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:13:52 crc kubenswrapper[4835]: I0201 08:13:52.570000 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:13:52 crc kubenswrapper[4835]: E0201 08:13:52.570600 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder 
pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:13:55 crc kubenswrapper[4835]: I0201 08:13:55.567563 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:55 crc kubenswrapper[4835]: I0201 08:13:55.567667 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:55 crc kubenswrapper[4835]: I0201 08:13:55.567776 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:55 crc kubenswrapper[4835]: E0201 08:13:55.568117 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:57 crc kubenswrapper[4835]: I0201 08:13:57.573400 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:13:57 crc kubenswrapper[4835]: I0201 08:13:57.573781 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:13:57 crc kubenswrapper[4835]: E0201 08:13:57.574126 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.072198 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" exitCode=1 Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.072262 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0"} Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.072306 4835 scope.go:117] "RemoveContainer" containerID="a173a7d4dfce7a09af6df1da942081f7f4d13b9bb491a5259c66bbecc01f055e" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.073566 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:13:58 crc 
kubenswrapper[4835]: I0201 08:13:58.073705 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.073750 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.073893 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:13:58 crc kubenswrapper[4835]: E0201 08:13:58.074503 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.567046 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:13:58 crc kubenswrapper[4835]: E0201 08:13:58.567324 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.567860 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.567999 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:13:58 crc kubenswrapper[4835]: I0201 08:13:58.568229 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:13:58 crc kubenswrapper[4835]: E0201 08:13:58.568779 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer 
pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:14:00 crc kubenswrapper[4835]: I0201 08:14:00.567003 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:14:00 crc kubenswrapper[4835]: I0201 08:14:00.567040 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:14:00 crc kubenswrapper[4835]: E0201 08:14:00.567241 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.120305 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" exitCode=1 Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.120386 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28"} Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.120812 4835 scope.go:117] "RemoveContainer" containerID="3f2186ff77af1c47eb15deb97901f7226557ec5b2ecb431045e2538fb29d941c" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121592 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121698 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121738 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121823 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" Feb 01 08:14:01 crc kubenswrapper[4835]: I0201 08:14:01.121857 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:14:01 crc kubenswrapper[4835]: E0201 08:14:01.122516 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.567970 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.568464 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.568507 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.568614 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:14:03 crc kubenswrapper[4835]: I0201 08:14:03.568671 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:14:03 crc kubenswrapper[4835]: E0201 08:14:03.569732 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:14:08 crc kubenswrapper[4835]: I0201 08:14:08.566967 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:14:08 crc kubenswrapper[4835]: I0201 08:14:08.567354 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:14:08 crc kubenswrapper[4835]: E0201 08:14:08.567727 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:14:10 crc kubenswrapper[4835]: I0201 08:14:10.567889 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:14:10 crc kubenswrapper[4835]: I0201 08:14:10.567975 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:14:10 crc kubenswrapper[4835]: I0201 08:14:10.568093 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:14:10 crc kubenswrapper[4835]: E0201 08:14:10.568400 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:14:12 crc kubenswrapper[4835]: I0201 08:14:12.566627 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:14:12 crc kubenswrapper[4835]: I0201 08:14:12.566958 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:14:12 crc kubenswrapper[4835]: E0201 08:14:12.567270 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:14:13 crc kubenswrapper[4835]: I0201 08:14:13.567342 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:14:13 crc kubenswrapper[4835]: E0201 08:14:13.567649 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567498 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 
01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567585 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567614 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567695 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:14:14 crc kubenswrapper[4835]: I0201 08:14:14.567738 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:14:14 crc kubenswrapper[4835]: E0201 08:14:14.568158 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.568971 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.569593 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.569640 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.569739 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" Feb 01 08:14:15 crc kubenswrapper[4835]: I0201 08:14:15.569754 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:14:16 crc kubenswrapper[4835]: E0201 08:14:16.061611 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" 
podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278138 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" exitCode=1 Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278191 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"} Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278219 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"} Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278233 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196"} Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.278255 4835 scope.go:117] "RemoveContainer" containerID="6f1a304f8cf6f337a3481cc037c018e2ca67c8da694b8266a2ce2af47a2cd825" Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.279048 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.279183 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:16 crc kubenswrapper[4835]: I0201 08:14:16.279256 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" Feb 01 08:14:16 crc kubenswrapper[4835]: E0201 08:14:16.279668 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.292927 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" exitCode=1 Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.292965 4835 generic.go:334] "Generic (PLEG): container finished" podID="559d52a7-a172-4c3c-aa13-ba07036485e1" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" exitCode=1 Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.292986 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9"} Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293019 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerDied","Data":"b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46"} Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293043 4835 scope.go:117] "RemoveContainer" containerID="9e3af5c375d91b4234037f0287b217ea171263f8f9d9c65d6ff3f4867a66ca09" Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293680 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293746 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293786 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293844 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.293854 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" Feb 01 08:14:17 crc kubenswrapper[4835]: E0201 08:14:17.294249 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:14:17 crc kubenswrapper[4835]: I0201 08:14:17.336156 4835 scope.go:117] "RemoveContainer" containerID="00e4247184998bf457f11c45646ac29bec4d69301672399dc31a3b0dcadfaf63" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.313163 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1" exitCode=1 Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.313236 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1"} Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.313741 4835 scope.go:117] "RemoveContainer" containerID="989717bbba5b6b4ae4b0d1d4f7a61748b7c6f589ae51889c79db71e2de187f8e" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.314887 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.315122 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.315185 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.315326 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:14:18 crc kubenswrapper[4835]: E0201 08:14:18.316036 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.327818 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.327945 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.327988 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.328080 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" Feb 01 08:14:18 crc kubenswrapper[4835]: I0201 08:14:18.328092 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" Feb 01 08:14:18 crc kubenswrapper[4835]: E0201 08:14:18.328674 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator 
pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:14:19 crc kubenswrapper[4835]: I0201 08:14:19.567240 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:14:19 crc kubenswrapper[4835]: I0201 08:14:19.567282 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:14:19 crc kubenswrapper[4835]: E0201 08:14:19.567697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.567642 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.568477 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.568568 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.568725 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:14:25 crc kubenswrapper[4835]: I0201 08:14:25.568795 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:14:25 crc kubenswrapper[4835]: E0201 08:14:25.569366 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for 
\"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:14:27 crc kubenswrapper[4835]: I0201 08:14:27.575791 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:14:27 crc kubenswrapper[4835]: I0201 08:14:27.575839 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:14:27 crc kubenswrapper[4835]: E0201 08:14:27.576202 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:14:28 crc kubenswrapper[4835]: I0201 08:14:28.566318 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:14:28 crc kubenswrapper[4835]: E0201 08:14:28.566932 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568104 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568181 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568206 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568257 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" Feb 01 08:14:29 crc kubenswrapper[4835]: I0201 08:14:29.568263 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" Feb 01 08:14:29 crc kubenswrapper[4835]: E0201 08:14:29.568610 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 40s restarting failed container=object-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:14:31 crc kubenswrapper[4835]: I0201 08:14:31.568987 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:14:31 crc kubenswrapper[4835]: I0201 08:14:31.569342 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:14:31 crc kubenswrapper[4835]: E0201 08:14:31.569912 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:14:33 crc kubenswrapper[4835]: I0201 08:14:33.567675 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:14:33 crc kubenswrapper[4835]: I0201 08:14:33.568295 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:14:33 crc kubenswrapper[4835]: I0201 08:14:33.568349 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1" Feb 01 08:14:33 crc kubenswrapper[4835]: I0201 08:14:33.568534 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:14:34 crc kubenswrapper[4835]: E0201 08:14:34.084295 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510336 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09" exitCode=1 Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510368 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb" exitCode=1 Feb 01 08:14:34 crc 
kubenswrapper[4835]: I0201 08:14:34.510386 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerStarted","Data":"6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587"} Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510456 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09"} Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510476 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb"} Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.510499 4835 scope.go:117] "RemoveContainer" containerID="29b7ce3af30880f7ecb8f62c88c6a4c1a1f8c4ed4096d54a6537054c4c4690df" Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.515918 4835 scope.go:117] "RemoveContainer" containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb" Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.516017 4835 scope.go:117] "RemoveContainer" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09" Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.516044 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1" Feb 01 08:14:34 crc kubenswrapper[4835]: E0201 08:14:34.516628 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:14:34 crc kubenswrapper[4835]: I0201 08:14:34.575705 4835 scope.go:117] "RemoveContainer" containerID="325bca52c08ed42940c6e4a23d4688b27fb5ddf25ac7d841b2c6cab74186c766" Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.533743 4835 generic.go:334] "Generic (PLEG): container finished" podID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" containerID="6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587" exitCode=1 Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.533783 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef","Type":"ContainerDied","Data":"6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587"} Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.533826 4835 scope.go:117] "RemoveContainer" containerID="4148c05d3be6e90c08a761e12bddf34ac10d3f8df249995dda8baf647a976eb3" Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.534599 4835 scope.go:117] "RemoveContainer" 
containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb" Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.534682 4835 scope.go:117] "RemoveContainer" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09" Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.534710 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1" Feb 01 08:14:35 crc kubenswrapper[4835]: I0201 08:14:35.534786 4835 scope.go:117] "RemoveContainer" containerID="6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587" Feb 01 08:14:35 crc kubenswrapper[4835]: E0201 08:14:35.535300 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567280 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567605 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567752 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567835 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567867 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:14:38 crc kubenswrapper[4835]: E0201 08:14:38.567900 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.567971 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:14:38 crc kubenswrapper[4835]: I0201 08:14:38.568023 4835 scope.go:117] "RemoveContainer" 
containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:14:38 crc kubenswrapper[4835]: E0201 08:14:38.568568 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:14:39 crc kubenswrapper[4835]: I0201 08:14:39.568108 4835 scope.go:117] "RemoveContainer" containerID="5a9377cb856ccf7081fea35b22fdca8abaecb964e76ae79047b5708d14fc83df" Feb 01 08:14:39 crc kubenswrapper[4835]: E0201 08:14:39.568456 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wdt78_openshift-machine-config-operator(303c450e-4b2d-4908-84e6-df8b444ed640)\"" pod="openshift-machine-config-operator/machine-config-daemon-wdt78" podUID="303c450e-4b2d-4908-84e6-df8b444ed640" Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.595625 4835 generic.go:334] "Generic (PLEG): container finished" podID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" containerID="0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30" exitCode=1 Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.595732 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"f2e2f8e4-eb90-4d97-8796-8f5d196577ce","Type":"ContainerDied","Data":"0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30"} Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.596040 4835 scope.go:117] "RemoveContainer" containerID="1ad619d8372499dd217f6f17d094fe911a5fb27dd5f2746a1688f8ec84be5ddf" Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597345 4835 scope.go:117] "RemoveContainer" containerID="ac9718227fda7b566c42d5651655d2a5f41536e3348f2d523e1006743398c1ab" Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597546 4835 scope.go:117] "RemoveContainer" containerID="6c2eda9ec63c66f8d05483b52157731dd577a2b42913bf716b4b0a8c616ebdfb" Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597608 4835 scope.go:117] "RemoveContainer" containerID="ed25c895b25eade3c816e34fbe868da8e46dec7aa5657dbd3fb29c5ee3d39f3e" Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597742 4835 scope.go:117] "RemoveContainer" containerID="0497b6fa825fe5c685a142a45b83cba6c78cee875feeb8c8d363023fb9cbab30" Feb 01 08:14:40 crc 
kubenswrapper[4835]: I0201 08:14:40.597790 4835 scope.go:117] "RemoveContainer" containerID="345bea7f4f881cc86342b09a036ef1c7d31aa2d5678014c858a3514cc941d456" Feb 01 08:14:40 crc kubenswrapper[4835]: I0201 08:14:40.597888 4835 scope.go:117] "RemoveContainer" containerID="5fd8631c275e74b9daf0e26ad124eb403c1bb9e6f270df35bbb9a56b904bab07" Feb 01 08:14:40 crc kubenswrapper[4835]: E0201 08:14:40.598687 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-updater\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-updater pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\", failed to \"StartContainer\" for \"container-sharder\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-sharder pod=swift-storage-0_swift-kuttl-tests(f2e2f8e4-eb90-4d97-8796-8f5d196577ce)\"]" pod="swift-kuttl-tests/swift-storage-0" podUID="f2e2f8e4-eb90-4d97-8796-8f5d196577ce" Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568027 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568112 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568139 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568194 4835 scope.go:117] "RemoveContainer" containerID="3b1bb3af0e5732f220334b3cd370553b1ddcc245875cfa3539320ae4bb4a8f28" Feb 01 08:14:41 crc kubenswrapper[4835]: I0201 08:14:41.568202 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" Feb 01 08:14:41 crc kubenswrapper[4835]: E0201 08:14:41.763892 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed 
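Note: the mixed back-off durations in the errors above (1m20s for container-updater on swift-storage-2, 2m40s on swift-storage-1, 5m0s elsewhere) are consistent with kubelet's per-container crash-loop back-off, which roughly doubles after each failed restart until it hits a 5m cap, so the 5m0s containers have simply been failing longer. A minimal sketch of that schedule, assuming the default 10s base and 5m cap rather than kubelet's actual implementation:

    def crashloop_backoff_s(failed_restarts: int, base: int = 10, cap: int = 300) -> int:
        """Approximate seconds kubelet waits before the next restart attempt."""
        return min(base * (2 ** failed_restarts), cap)

    # failed_restarts 0..5 -> 10, 20, 40, 80 (1m20s), 160 (2m40s), 300 (5m0s, capped)
    print([crashloop_backoff_s(n) for n in range(6)])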
to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.635391 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"559d52a7-a172-4c3c-aa13-ba07036485e1","Type":"ContainerStarted","Data":"1a7388e9a033acf55cdc32c414808c3f5eb1860ae550b0ea7c76774f61add823"} Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.636100 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.636166 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.636187 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:42 crc kubenswrapper[4835]: I0201 08:14:42.636243 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" Feb 01 08:14:42 crc kubenswrapper[4835]: E0201 08:14:42.636560 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1" Feb 01 08:14:45 crc kubenswrapper[4835]: I0201 08:14:45.567308 4835 scope.go:117] "RemoveContainer" containerID="88ec643f39795cdb2c6a1d7746e26a125fe8e430ef3bc3de351739f8febd6395" Feb 01 08:14:45 crc kubenswrapper[4835]: I0201 08:14:45.567747 4835 scope.go:117] "RemoveContainer" containerID="46435f91f9ad040cb96f09344e72ba38862875f474ef0b4d260ba49016ebc716" Feb 01 08:14:45 crc kubenswrapper[4835]: E0201 08:14:45.568046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-6c7f677bc9-lq29p_swift-kuttl-tests(0449d2d9-ddcc-4eaa-84b1-9095448105f5)\"]" pod="swift-kuttl-tests/swift-proxy-6c7f677bc9-lq29p" podUID="0449d2d9-ddcc-4eaa-84b1-9095448105f5" Feb 01 08:14:50 crc kubenswrapper[4835]: I0201 08:14:50.567166 4835 scope.go:117] "RemoveContainer" 
containerID="7138874afd09789f5daadf9e71d0e0638e55d591b511edfee4ca6f574127ecbb" Feb 01 08:14:50 crc kubenswrapper[4835]: I0201 08:14:50.567650 4835 scope.go:117] "RemoveContainer" containerID="82af775abdc23c8025e4e12506f4fc3d3f06dcc9f90861bdc6638a928f4dae09" Feb 01 08:14:50 crc kubenswrapper[4835]: I0201 08:14:50.567697 4835 scope.go:117] "RemoveContainer" containerID="82222831abee73ef6e11850e6eb3e04c17234ab7afe7bc2f282c29b15fca97d1" Feb 01 08:14:50 crc kubenswrapper[4835]: I0201 08:14:50.567818 4835 scope.go:117] "RemoveContainer" containerID="6e565122ef462e611013566b06126639f064fbcfd638c2a4f4e7ea64feaa1587" Feb 01 08:14:50 crc kubenswrapper[4835]: E0201 08:14:50.568379 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=container-updater pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-2_swift-kuttl-tests(69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef)\"]" pod="swift-kuttl-tests/swift-storage-2" podUID="69f0354b-0c3b-4bc5-8aeb-0ac1b59ff0ef" Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.567403 4835 scope.go:117] "RemoveContainer" containerID="ed6b1dfc28a96c0dadb454b87a4e055f69f6d045a6bbbad22ef3fb1f7e4a7c11" Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568004 4835 scope.go:117] "RemoveContainer" containerID="9fda13af388ede50a2edd56288f39110ed974c5185cd4478649e289e6840de92" Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568061 4835 scope.go:117] "RemoveContainer" containerID="349d14c0bc9d9924879e2d4fc7825fdf82caa24a1557f44f57c7f333660b2196" Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568208 4835 scope.go:117] "RemoveContainer" containerID="b052928791e9742ded6680dfb933f1856c4646e6a4dc384cde46d5e3fe778e46" Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568272 4835 scope.go:117] "RemoveContainer" containerID="f1142147cb411e230e5da406d988f9cd54e2f8963f921132b0509ae02c48bee0" Feb 01 08:14:53 crc kubenswrapper[4835]: E0201 08:14:53.568281 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"proxy-httpd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-httpd pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\", failed to \"StartContainer\" for \"proxy-server\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=proxy-server pod=swift-proxy-7d8cf99555-6vq9r_swift-kuttl-tests(8ccb8908-ffc6-4032-8907-da7491bf9304)\"]" pod="swift-kuttl-tests/swift-proxy-7d8cf99555-6vq9r" podUID="8ccb8908-ffc6-4032-8907-da7491bf9304" Feb 01 08:14:53 crc kubenswrapper[4835]: I0201 08:14:53.568461 4835 scope.go:117] "RemoveContainer" containerID="92e3b7eb343697f7a86cff05bff0645c131fbdc7c17b30a33276c9b06af1b9f9" Feb 01 08:14:53 crc kubenswrapper[4835]: E0201 08:14:53.569019 4835 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"account-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=account-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-replicator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=container-replicator pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"container-updater\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=container-updater pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\", failed to \"StartContainer\" for \"object-expirer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=object-expirer pod=swift-storage-1_swift-kuttl-tests(559d52a7-a172-4c3c-aa13-ba07036485e1)\"]" pod="swift-kuttl-tests/swift-storage-1" podUID="559d52a7-a172-4c3c-aa13-ba07036485e1"